Abstract

The ethics of autonomous vehicles (AVs) has received a great deal of attention in recent years, particularly with regard to their decisional policies in accident situations where human harm is a likely consequence. Starting from the assumption that human harm is unavoidable, many authors have developed differing accounts of what morality requires in these situations. In this article, a strategy for AV decision-making is proposed, the Ethical Valence Theory, which paints AV decision-making as a type of claim mitigation: different road users hold different moral claims on the vehicle’s behavior, and the vehicle must mitigate these claims as it makes decisions about its environment. In the context of autonomous vehicles, the harm produced by an action and the uncertainties connected to it are quantified and accounted for through deliberation, resulting in an ethical implementation coherent with reality. The goal of this approach is not to define how moral theory requires vehicles to behave, but rather to provide a computational approach that is flexible enough to accommodate a number of ‘moral positions’ concerning what morality demands and what road users may expect, offering an evaluation tool for the social acceptability of an autonomous vehicle’s ethical decision making.
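As a rough, illustrative reading of the claim-mitigation idea described above, the Python sketch below shows one way a vehicle might weigh road users' claims against estimated harm and its likelihood when choosing a maneuver. The Outcome structure, the valence weights, and all numbers are assumptions invented for this example; they are not the model developed in the article.

# Hypothetical sketch of claim mitigation under uncertainty (illustrative only).
from dataclasses import dataclass

@dataclass
class Outcome:
    road_user: str      # who bears the harm
    valence: float      # assumed strength of that user's moral claim (0-1)
    harm: float         # estimated harm severity if the outcome occurs (0-1)
    probability: float  # estimated probability of the outcome given the action

def expected_claim_violation(outcomes: list[Outcome]) -> float:
    """Aggregate each road user's claim, weighted by harm severity and likelihood."""
    return sum(o.valence * o.harm * o.probability for o in outcomes)

def choose_action(actions: dict[str, list[Outcome]]) -> str:
    """Pick the maneuver that minimizes the aggregate expected claim violation."""
    return min(actions, key=lambda a: expected_claim_violation(actions[a]))

# Toy scenario with made-up numbers: swerving shifts risk from a pedestrian to the passenger.
actions = {
    "brake_straight": [Outcome("pedestrian", valence=0.9, harm=0.8, probability=0.6)],
    "swerve_left":    [Outcome("passenger",  valence=0.7, harm=0.5, probability=0.4)],
}
print(choose_action(actions))  # -> "swerve_left" under these assumed values

The point of the sketch is only to show that claims, harms, and probabilities can be combined into a single deliberative comparison; which weighting scheme is acceptable is precisely the question the article leaves open to different moral positions.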

Highlights

  • Autonomous vehicles (AVs) are shifting from prospect to imminent reality in the eyes of Original Equipment Manufacturers, government institutions, and the general public alike

  • The theory paints AV decision-making as a type of claim mitigation: different road users hold different moral claims on the vehicle’s behavior, and the vehicle must mitigate these claims as it makes decisions about its environment

  • The research explored in this article, and the moral and computational approach that it underpins, should not be seen as an ‘ultimate’ normative answer to behavior in autonomous vehicles

Introduction

Autonomous vehicles (AVs) are shifting from prospect to imminent reality in the eyes of Original Equipment Manufacturers, government institutions, and the general public alike. In unavoidable crash scenarios, the autonomous vehicle is purported to make a deliberative decision as to how it will crash, supplanting the ineffective and irrational reactions of human drivers (Lin et al., 2017). This is a tall order to fill for any artificial decision process, let alone one that is acting within an environment as complex, volatile, and unpredictable as any modern traffic community. In spite of these challenges, an implementable solution to effective and acceptable decision making in autonomous vehicles must be found. Many stakeholders, institutions, and drivers have heralded autonomous

