The infamous trolley problem keeps turning up in our lives. Imagine that you are standing next to some tram tracks and you see a trolley speeding straight towards five people who are tied to the track. You can divert the trolley by pulling a lever, but there is one person tied to the other track. What do you do?
It may or may not surprise you that the answer differs according to nationality, gender, age, and whether the decision is merely hypothetical or involves real lives. So, would you pull the lever, causing one death but saving five?
MIT researchers recently applied this famous ethical dilemma to the world of self-driving cars. In 2014 they created an experiment called the Moral Machine.
The idea was to create a game-like platform that would crowdsource people’s decisions on how self-driving cars should prioritize lives in different variations of the “trolley problem.” In the process, the data generated would provide insight into the collective ethical priorities of different cultures.
After four years they had gathered millions of answers, far more feedback than they had expected. An analysis of the collected data was published in a new paper in the journal Nature.
If autonomous vehicles become our day-to-day reality, society will have to deal with a brand-new issue: which lives to prioritize in the event of a crash. The Moral Machine tested different variants of the trolley problem: humans or pets, more lives or fewer, pedestrians or passengers, young or old, fit or sickly, women or men, people of higher or lower social status.
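Purely as an illustration of the kind of paired-choice data such a platform crowdsources, one could sketch a scenario and a simple tally like this (the scenario descriptions and response counts below are hypothetical, not data from the actual study):

```python
from dataclasses import dataclass
from collections import Counter

@dataclass(frozen=True)
class Scenario:
    """One trolley-style dilemma: the car can spare group A or group B."""
    group_a: str  # e.g. "five pedestrians, young"
    group_b: str  # e.g. "one passenger, elderly"

def tally(responses):
    """Count how often participants chose to spare each group."""
    return Counter(responses)

# Hypothetical responses to one scenario (illustrative only).
scenario = Scenario("five pedestrians", "one passenger")
answers = ["group_a", "group_a", "group_b", "group_a"]
print(tally(answers))  # Counter({'group_a': 3, 'group_b': 1})
```

Aggregating millions of such choices, broken down by the respondents' country, is what allows the researchers to compare ethical preferences across cultures.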
How is the decision affected by the cultural background of the person making it?
The study showed that participants from more individualistic cultures were more likely to spare the young over the old, while those from more collectivist countries, such as Taiwan and China, showed a weaker preference for sparing the young. Participants in Latin American countries were more likely to spare the young, the fit, and higher-status individuals. A few trends held across the globe: sparing humans over animals, and more lives over fewer.
In the last few years, more people have been thinking about the ethics of AI and its consequences for different cultural groups. Of course, there is no "correct" answer to these questions. Nevertheless, they pose a real problem for self-driving cars and crash scenarios. What are your thoughts? How would you solve this dilemma?
https://www.insidescience.org/news/moral-dilemmas-self-driving-cars