Abstract

In this paper, I will argue that automated vehicles should not swerve to avoid a person or vehicle in their path, unless they can do so without imposing risks onto others. This is the conclusion that we should reach even if we start by assuming that we should divert the trolley in the standard trolley case (in which the trolley will hit and kill five people on the track, unless it is diverted onto a different track, where it will hit and kill just one person). In defence of this claim, I appeal to the distribution of moral and legal responsibilities, highlighting the importance of safe spaces, and arguing in favour of constraints on what can be done to minimise casualties. My arguments draw on the methodology associated with the trolley problem. As such, this paper also defends this methodology, highlighting a number of ways in which authors misunderstand and misrepresent the trolley problem. For example, the ‘trolley problem’ is not the ‘name given by philosophers to classic examples of unavoidable crash scenarios, historically involving runaway trolleys’, as Millar suggests, and trolley cases should not be compared with ‘model building in the (social) sciences’, as Gogoll and Müller suggest. Trolley cases have more in common with lab experiments than with model building, and the problem referred to in the trolley problem is not the problem of deciding what to do in any one case. Rather, it refers to the problem of explaining what appear to be conflicting intuitions when we consider two cases together. The problem, for example, could be: how do we justify the claim that automated vehicles should not swerve, even if we accept the claim that we should divert the trolley in an apparently similar trolley case?

Highlights

  • Most authors who appeal to the trolley problem when discussing automated vehicles appeal to something like Judith Jarvis Thomson’s Bystander at the Switch case (Thomson 1985, p. 1397), which I will call Switch: a trolley is heading towards five individuals on the track

  • I will appeal to these differences to justify the claim that we should not programme cars to swerve in dilemma cases, even if we assume that we should divert the trolley in Switch

  • Step one: we identify a trolley case which captures ‘the correct set of variables’, which will function as our ‘model’



Introduction

Most authors who appeal to the trolley problem when discussing automated vehicles appeal to something like Judith Jarvis Thomson’s Bystander at the Switch case (Thomson 1985, p. 1397), which I will call Switch.

I am appealing to particular legal responsibilities, suggesting that these are good laws, and that we have good reason to support laws which distribute responsibilities in this way. Another issue that some have discussed in relation to the ethics of automated vehicles is whether it is legitimate to put the safety of the person in the car ahead of the safety of other road users, with the added dimension that there is obviously an incentive for car manufacturers to put the owner’s safety ahead of other people’s, unless legislation takes this choice away.

Let us suppose that the system runs the calculations, resulting in the following judgements:

  • Do not swerve
  • Swerve
