Moral Machine gauges public perspective on autonomous vehicles
A new study from a US university asks the public to judge how artificial intelligence should be programmed to make decisions.
The Massachusetts Institute of Technology (MIT) is surveying the public on which decisions autonomous cars should make in fatal situations.
MIT’s ‘Moral Machine’ poses numerous scenarios to the public in which an autonomous vehicle must decide whom to kill. Respondents are given two choices, and in each, lives must be lost – there is no non-fatal option. To make each scenario and its victims clear, a written explanation accompanies the graphic demonstration.