Abstract

The chief purposes of this chapter are to explore the problem of moral uncertainty as it pertains to autonomous vehicles and to outline possible solutions. The problem is the following: How should autonomous vehicles be programmed to act when the person who has the authority to choose the ethics of the autonomous vehicle is under moral uncertainty? Roughly, an agent is morally uncertain when she has access to all (or most) of the relevant non-moral facts, including but not limited to empirical and legal facts, but still remains uncertain about what morality requires of her. We argue that moral uncertainty constitutes an important problem in the context of autonomous vehicles and then critically engage with two proposed solutions. We conclude by discussing a solution that we think is more promising—that of the philosopher Andrew Sepielli—and offer some support in its defense.
