Abstract

I defend a recognize-everyone’s-interests solution to a corporate ethics dilemma about designing and manufacturing driverless cars. The dilemma arises from the tension between survey participants’ (1) belief that driverless cars should kill fewer people in unavoidable crashes in which cars face a choice about whom to kill or spare and (2) attitude that they would not buy a car that would kill them in order to spare a larger number of people (Bonnefon et al., 2016). I argue that this dilemma derives from a shortcoming in the classic formulation of the trolley problem. The trolley problem assumes, without proving, that it is ethical to kill one person in order to spare a larger number of people (Thomson, 1985). I seek to establish the opposite claim. I call this endeavor, proving that it is unethical to kill one in order to spare five, the new trolley problem. My proportional-risk approach addresses it by requiring managers to recognize the interests of all people affected in unavoidable-crash scenarios: dividing the risk of harm among everyone exposed to that risk. Automatically killing one in order to spare five is unethical, then, because it ignores the interests of the single person. When controlled by the recognize-everyone’s-interests algorithm, cars will probably (but not automatically) kill fewer people. My approach thus solves the new trolley problem while satisfying both prongs of the dilemma about driverless cars. I conclude by discussing legal liability issues that arise in implementing the recognize-everyone’s-interests algorithm in driverless cars.
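The abstract states the proportional-risk idea only at the level of principle. As a minimal sketch of one possible way such an allocation could be operationalized, the code below implements a weighted lottery over unavoidable-crash trajectories in which each trajectory's chance of being chosen is proportional to the number of people it would spare. Under this assumption the smaller group is usually, but not automatically, the one harmed, and every exposed person retains some chance of being spared. The Trajectory class, the choose_trajectory function, and the spares-proportional weighting rule are illustrative assumptions, not the paper's own specification.

```python
import random
from dataclasses import dataclass


@dataclass
class Trajectory:
    """One unavoidable-crash option and the number of people it would harm."""
    label: str
    people_at_risk: int


def choose_trajectory(options, rng=random.Random()):
    """Weighted lottery over unavoidable-crash trajectories.

    Illustrative assumption: each trajectory's probability of being chosen is
    proportional to the number of people it would spare, so no exposed person's
    interests are simply ignored, and the option that harms fewer people is
    probably (but not automatically) the one taken.
    """
    total_exposed = sum(t.people_at_risk for t in options)
    # Weight = number of people spared if this trajectory is taken.
    weights = [total_exposed - t.people_at_risk for t in options]
    if sum(weights) == 0:
        # Degenerate case: every option harms everyone; fall back to a uniform draw.
        weights = [1] * len(options)
    return rng.choices(options, weights=weights, k=1)[0]


# Classic 1-vs-5 scenario: under this weighting, the single person is harmed
# with probability 5/6 and the group of five with probability 1/6.
options = [Trajectory("swerve toward one pedestrian", 1),
           Trajectory("stay on course toward five pedestrians", 5)]
print(choose_trajectory(options).label)
```

In the 1-vs-5 example the expected number of deaths under this illustrative weighting is 5/6 × 1 + 1/6 × 5 ≈ 1.67, which is consistent with the abstract's claim that such a car would probably, though not automatically, kill fewer people.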
