Abstract

We argue that explainable artificial intelligence (XAI), specifically reason-giving XAI, often constitutes the most suitable way of ensuring that someone can properly be held responsible for decisions that are based on the outputs of artificially intelligent (AI) systems. We first show that, to close moral responsibility gaps (Matthias 2004), often a human in the loop is needed who is directly responsible for particular AI-supported decisions. Second, we appeal to the epistemic condition on moral responsibility to argue that, in order to be responsible for her decision, the human in the loop has to have an explanation available of the system’s recommendation. Reason explanations are especially well-suited to this end, and we examine whether, and how, it might be possible to make such explanations fit with AI systems. We support our claims by focusing on a case of disagreement between the human in the loop and the AI system.

Highlights

  • Sophisticated artificially intelligent (AI) systems are spreading to ever more sensitive areas of human life.

  • Though much of what we argue may hold for different forms of responsibility, we are concerned with responsibility primarily in the sense of appropriate praise- or blameworthiness, as exemplified by Shoemaker’s (2015, 113) notion of accountability: “One is an accountable agent just in case one is liable for being a fitting target of a subset of responsibility responses to one – a subset organized around the paradigm sentimental syndrome pair of agential anger/gratitude – in virtue of one’s quality of regard.” In the following, when speaking of responsibility, accountability is what we have in mind.

  • In the case we focus on in this paper, it may be that Herbert the human resources (HR) manager is responsible for rejecting April’s application, but not responsible for discriminating against her. In terms of the “action under a description” terminology, he may be responsible for his decision or action under the description “rejecting April’s application,” but not under the description “discriminating against April.”


Summary

Introduction

Sophisticated artificially intelligent (AI) systems are spreading to ever more sensitive areas of human life. A whole host of papers revolve around problems like those mentioned in the previous paragraphs; they provide arguments for XAI from the broader context of morality or society in general (e.g., Asaro, 2015; Binns et al., 2018; Cave et al., 2018; Floridi et al., 2018; Langer, Oster, et al., 2021; Lipton, 2018; Wachter et al., 2017). These discussions do not always tell us how exactly we can get from a need for reasonable trust, human autonomy, accountability, responsibility, or the like, to a requirement for explainable AI systems. By appealing to the epistemic condition on moral responsibility, we substantiate the claim that the outputs of many such decision support …

The Challenge of Adequate Responsibility Attribution
Related worries about moral responsibility can be raised by other cases.
Why We Need Someone in the Loop
Connecting Responsibility to Explainability
The Advantages of Reason Explanations
Open Questions and Future Work
