Moral Machine gauges public perspective on autonomous vehicles
The public is being asked to judge artificial intelligence decision-making in a new study from a US university.
The Massachusetts Institute of Technology (MIT) is surveying the public on which decisions autonomous cars should make in fatal situations.
MIT’s ‘Moral Machine’ poses numerous scenarios to the public in which an autonomous vehicle must decide whom to kill. Respondents are given two choices, and in each, lives must be lost – there is no non-fatal option. Each scenario is accompanied by a graphic and a written explanation so that the situation, and who the potential victims are, is clear.
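MIT has not published the Moral Machine’s internal data model, but as a rough illustration, a forced-choice scenario of the kind described above might be represented along the following lines. All names here (Scenario, Outcome, present) are assumptions made for this sketch, not MIT’s actual implementation.

```python
# Hypothetical sketch: one way a Moral Machine-style forced-choice
# scenario could be represented. Names are illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class Outcome:
    """One of the two fatal options a respondent can pick."""
    label: str            # e.g. "swerve" or "stay on course"
    victims: list[str]    # who dies if this option is chosen
    explanation: str      # written description shown alongside the graphic


@dataclass
class Scenario:
    """A single dilemma: exactly two outcomes, both fatal."""
    option_a: Outcome
    option_b: Outcome

    def present(self) -> None:
        # Show both options so a respondent can compare them.
        for tag, outcome in (("A", self.option_a), ("B", self.option_b)):
            print(f"Option {tag} ({outcome.label}): {outcome.explanation}")
            print(f"  Victims: {', '.join(outcome.victims)}")


if __name__ == "__main__":
    scenario = Scenario(
        option_a=Outcome("stay on course", ["two pedestrians"],
                         "The car continues straight and hits people crossing."),
        option_b=Outcome("swerve", ["one passenger"],
                         "The car swerves into a barrier, killing its occupant."),
    )
    scenario.present()
```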