Driven by fast-developing technology and its endless promises, autonomous systems increasingly rely on complex algorithms to handle situations that require some form of moral reasoning. Autonomous vehicles and lethal battlefield robots are good examples of such products, given the tremendous complexity of the tasks they must carry out and the human lives at stake.
When it comes to the ethics of machines, the discussion often focuses on extreme examples (such as the above-mentioned projects) where human life and death are involved. But what about the more mundane and insignificant objects of our everyday lives? Soon, “smart” objects might also need moral capacities, because “they know too much” about their surroundings to take a neutral stance. Indeed, with fields such as home automation, ambient intelligence, and the Internet of Things, everyday objects will have access to an ever-growing multitude of data about us and our environment.
If a “smart” coffee machine knows about its user’s heart problems, should it agree to serve him a coffee when he asks for one?
Even in such a banal situation, a product of this complexity cannot accommodate all parties. The system will be designed to take into account certain inputs and to process a ‘certain’ type of information under a ‘certain’ kind of logic.
How are these “certainties” defined, and by whom? How will these autonomous systems solve problems that have no objective answers? Moreover, since ethics is deeply subjective, how will machines deal with the variety of profiles, beliefs, and cultures?
The “Ethical Things” project looks at how an object, facing everyday ethical dilemmas, can keep a dose of humanity in its final decision while staying flexible enough to accommodate various ethical beliefs.
To achieve this, our “ethical fan” connects to a crowd-sourcing website every time it faces an ethical dilemma. It posts the dilemma it is facing and awaits the help of one of the “workers”, or mechanical turks, who tells the fan how to behave. This ensures that the decision the system executes is the fruit of real human moral reasoning.
Moreover, the fan lets the user set various traits (such as religion, degree, sex, and age) as criteria for choosing the worker who should respond to the dilemma, ensuring that the worker, or ethical agent, shares part of the user’s culture and belief system.
(Should it be a middle-aged Muslim male with a PhD or a young Atheist female?)
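The decision loop described above can be sketched roughly as follows. This is a minimal illustration, not the project’s actual implementation: the `Worker` fields, the shape of the crowd feed, and the `resolve` function are all assumptions made for this example.

```python
from dataclasses import dataclass

@dataclass
class Worker:
    # Traits the user may filter on (mirroring the ones named above);
    # the exact fields are an assumption for this sketch.
    religion: str
    degree: str
    sex: str
    age: int

def matches(worker: Worker, criteria: dict) -> bool:
    """True if the worker satisfies every trait the user specified."""
    return all(getattr(worker, trait) == value
               for trait, value in criteria.items())

def resolve(dilemma: str, crowd, criteria: dict):
    """Post a dilemma and return the verdict of the first worker who
    matches the user's criteria, so a human makes the final call.
    Here `crowd` is any iterable of (worker, answer) pairs standing in
    for the crowd-sourcing site's responses."""
    for worker, answer in crowd:
        if matches(worker, criteria):
            return answer
    return None  # no matching worker has answered yet; keep waiting
```

For instance, a user who wants the fan’s dilemmas judged only by workers holding a PhD would pass `criteria={"degree": "PhD"}`, and answers from non-matching workers would simply be skipped.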