Abstract

This article is about the role of factual uncertainty in moral decision-making as it concerns the ethics of machine decision-making (i.e., decisions by AI systems, such as autonomous vehicles, autonomous robots, or decision support systems). The view defended here is that factual uncertainties require a normative evaluation and that the ethics of machine decision-making faces a threefold problem, which concerns what a machine ought to do given its technical constraints, what decisional uncertainty is acceptable, and what trade-offs are acceptable to decrease that decisional uncertainty.

Highlights

  • Uncertainty yields problems for moral decision-making in at least two ways

  • I will focus on factual uncertainty, as it concerns the ethics of machine decisions

  • Once we know what we ought to do in idealized cases, we can analyze what to do based on a theory of rational decision-making for situations involving factual uncertainty


Summary

Introduction

Uncertainty yields problems for moral decision-making in at least two ways. First, there is the issue of ‘moral uncertainty’, which is uncertainty about which normative principles should guide what we ought to do. Second, there is the issue of ‘factual uncertainty’, which is uncertainty about the (possible) states of affairs or the (possible) consequences of our actions. Once we know what we ought to do in idealized cases, we can analyze what to do based on a theory of rational decision-making for situations involving factual uncertainty.

There is a large literature on the ethics of crashing with autonomous vehicles, which is concerned with the ethics of machine decision-making for autonomous vehicles in situations of an unavoidable crash.7 In this context, proponents who explicitly or implicitly adhere to the standard approach mostly discuss so-called ‘applied trolley problems’. How a machine is constituted affects its decision-making abilities and, at the same time, can yield potential harms (or so I will argue). This trilemma is what I call the ‘input-selection problem’, which concerns the question of which inputs are needed (for ethical decision-making with sufficient certainty) and which inputs are acceptable (granted the possible harms of using those inputs).

To simplify the language, I will often refer to normative ethical questions as normative questions (and likewise for similar formulations), even though normative questions are not limited to ethics.
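To make the appeal to a theory of rational decision-making under factual uncertainty concrete, below is a minimal sketch of one standard such theory, expected-utility maximization: the agent weighs the value of each action in each possible state by its credence that the state obtains. All states, probabilities, and utilities in the sketch are illustrative assumptions for a crash scenario, not values or a method taken from the paper.

```python
# Minimal sketch: choosing among actions under factual uncertainty
# by maximizing expected utility. The states, probabilities, and
# utilities below are illustrative assumptions, not from the paper.

# The machine's credences: probability that each factual state obtains.
state_probs = {
    "pedestrian_ahead": 0.7,
    "clear_road": 0.3,
}

# Utility (here, a stand-in for moral value) of each action in each state.
utilities = {
    "brake":  {"pedestrian_ahead": 0.0,  "clear_road": -1.0},
    "swerve": {"pedestrian_ahead": -2.0, "clear_road": -3.0},
}

def expected_utility(action: str) -> float:
    """Utility in each state, weighted by the probability of that state."""
    return sum(state_probs[s] * u for s, u in utilities[action].items())

# The rational choice under this rule: the action with the highest
# expected utility ("brake" in this toy example, at -0.3 vs. -2.3).
best = max(utilities, key=expected_utility)
print(best, {a: round(expected_utility(a), 2) for a in utilities})
```

Note what the sketch leaves open, which is precisely the paper's point: the credences and utilities are inputs that themselves require a normative evaluation, including how certain the machine must be before acting and what it may do to obtain those inputs.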

The standard or the uncertainty approach?
The grandma problem
Transparency
Privacy and data protection
Time‐sensitive decisions
So what should we do?
Findings
Summation and conclusions

