
MIT's 'Moral Machine' Crowdsources Decisions About Autonomous Driving, but Experts Call It Misguided

By TechRepublic

October 20, 2016


The Massachusetts Institute of Technology's (MIT) Media Lab has developed a "Moral Machine" platform that enables the public to voice their opinions on what kind of ethical "decisions" autonomous vehicles should be programmed to make.

The Moral Machine presents participants with a "moral dilemma" and asks them to choose a preferred outcome; they can then compare their decision with those of other participants in online discussion.

MIT professor Iyad Rahwan says the platform is designed to "further our scientific understanding of how people think about machine morality."

However, skeptics such as Gartner analyst Michael Ramsey see flaws in this model, because panicked human drivers typically do not and cannot make the kinds of moral choices the platform presents. "The most likely scenario is that the car will be programmed to avoid a collision, without regard to 'whom to save,'" Ramsey says.

Moreover, University of Southern California professor Jeffrey Miller believes the platform encompasses too few scenarios with autonomous vehicles. "There are more mundane decisions about breaking the law in order to be safe...[that] happen with frequency with human drivers," Miller notes.

Rahwan says the platform has collected 14 million decisions from 2 million participants, data that will help build a global picture of machine ethics and highlight cross-cultural differences.



Abstracts Copyright © 2016 Information Inc., Bethesda, Maryland, USA

