Moral Machine's perspective on autonomous vehicles
A new study from a US university asks the public to judge how artificial intelligence should be programmed to make decisions.
The Massachusetts Institute of Technology (MIT) is surveying the public's views on which decisions autonomous cars should make in situations where a fatality is unavoidable.
MIT’s ‘Moral Machine’ presents respondents with a series of scenarios in which an autonomous vehicle must decide whom to kill. Each scenario offers two choices, and in both, lives must be lost – there is no non-fatal option. To make each scenario and its potential victims clear, a written explanation accompanies the graphic illustration.