Abstract

The autonomous vehicle (AV) is one of the first commercialized AI-embedded robots to make autonomous decisions. Despite technological advancements, AV accidents with life-and-death consequences cannot be completely eliminated. The emerging social concern of how an AV should make ethical decisions during unavoidable accidents is referred to as the moral dilemma of AV, which has prompted heated discussion among various stakeholders. However, research gaps remain in explainable AV ethical decision-making processes that predict which AV moral behaviors are acceptable from AV users' perspectives. This study addresses the key question: What factors affect ethical behavioral intentions in the AV moral dilemma? To answer this question, this study draws on theories from multidisciplinary research fields to propose the "Integrative ethical decision-making framework for the AV moral dilemma." The framework includes four interdependent ethical decision-making stages: AV moral dilemma issue framing, intuitive moral reasoning, rational moral reasoning, and ethical behavioral intention making. It also includes variables (e.g., perceived moral intensity, individual factors, and personal moral philosophies) that influence the ethical decision-making process. For instance, the framework proposes that AV users from Eastern cultures will tend to endorse a situationist ethics position (high idealism and high relativism), which views ethical decisions as relative to context, more than AV users from Western cultures. This proposition is derived from the link between individual factors and personal moral philosophy. Moreover, the framework adopts a dual-process theory, which explains that both intuitive and rational moral reasoning are integral processes of ethical decision-making during the AV moral dilemma.
Further, the framework describes that ethical behavioral intentions leading to decisions in the AV moral dilemma are not fixed but depend on how an individual perceives the seriousness of the situation, which in turn is shaped by their personal moral philosophy. The framework thus provides a step-by-step explanation of how pluralistic ethical decision-making occurs, reducing the abstractness of AV moral reasoning processes.

Highlights

  • With recent artificial intelligence (AI) advancements, robots are expanding from conducting predefined tasks in confined environments to becoming autonomous agents in real-world contexts

  • To provide holistic explanations of ethical decision-making during the autonomous vehicle (AV) moral dilemma, this study aims to develop a comprehensive AV ethics framework by integrating both the intuitionist and rationalist moral reasoning approaches and understanding how individual and situational characteristics affect ethical decision-making phases

  • Perceived moral intensity (PMI) includes the perception of risk and uncertainty, important features to consider during AV accidents (Kruegel and Uhl, 2020); these latter two concepts were not included separately

Introduction

With recent artificial intelligence (AI) advancements, robots are expanding from conducting predefined tasks in confined environments to becoming autonomous agents in real-world contexts. AVs are expected to increase road safety by reducing the number of accidents and the severity of crash consequences through more rational decision-making (Anderson et al., 2014; Kumfer and Burgess, 2015; Nyholm and Smids, 2016; Gogoll and Müller, 2017; Hulse et al., 2018). Despite these technological advancements, AV accidents cannot be entirely eliminated (Goodall, 2014b; Bonnefon et al., 2016; Guo et al., 2018; Nyholm and Smids, 2018). Understanding human moral reasoning will provide possible explanations of how moral judgment occurs in the AV moral dilemma.
