Abstract

This paper argues against the view that trolley cases are of little or no relevance to the ethics of automated vehicles. Four arguments for this view are outlined and rejected: the Not Going to Happen Argument, the Moral Difference Argument, the Impossible Deliberation Argument and the Wrong Question Argument. In making clear where these arguments go wrong, a positive account is developed of how trolley cases can inform the ethics of automated vehicles.

Highlights

  • Automated vehicles (AVs) are expected to encounter road-traffic scenarios which present moral dilemmas. There is a dispute about what kind of moral dilemmas AVs will encounter

  • This paper argues against the view that trolley cases are of little or no relevance to the ethics of automated vehicles

  • The confusion about trolley cases and their role in AV ethics is explained by two factors



Introduction

Automated vehicles (AVs) are expected to encounter road-traffic scenarios which present moral dilemmas. There is a dispute about what kind of moral dilemmas AVs will encounter. What some people have in mind are dilemmas where harming at least one person is unavoidable, and a choice is required about how to distribute harms or risks of harms between multiple persons whose interests are in conflict (Goodall 2014; Gurney 2016; Lin 2016; Leben 2017; Keeling 2018a, b). What others have in mind are dilemmas that arise in normal driving. One example is a scenario in which the AV must decide how heavily to brake when approaching a crossing, given uncertainty about whether a pedestrian will step into the road.

