Abstract

Self‐driving cars hold out the promise of being much safer than conventional cars. Yet they cannot be 100% safe. Accordingly, they need to be programmed for how to deal with crash scenarios. Should cars be programmed to always prioritize their owners, to minimize harm, or to respond to crashes on the basis of some other type of principle? The article first discusses whether everyone should have the same “ethics settings.” Next, the oft‐made analogy with the trolley problem is examined. Then follows an assessment of recent empirical work on laypeople's attitudes about crash algorithms relevant to the ethical issue of crash optimization. Finally, the article discusses what traditional ethical theories such as utilitarianism, Kantianism, virtue ethics, and contractualism imply about how cars should handle crash scenarios. The aim of the article is to provide an overview of the existing literature on these topics and to assess the current state of the discussion.

