Abstract

There is growing interest in explanations as an ethical and technical solution to the problem of 'opaque' AI systems. In this essay we point out that technical and ethical approaches to Explainable AI (XAI) rest on different assumptions and pursue different aims. Further, the organizational perspective is missing from this discourse. In response, we formulate three key questions for Explainable AI research from an organizational perspective: 1) Who is the 'user' in Explainable AI? 2) What is the 'purpose' of an explanation in Explainable AI? and 3) Where does an explanation 'reside' in Explainable AI? Our aim is to prompt collaboration across the disciplines working on Explainable AI.

Highlights

  • When Google Maps tells you to turn right in 200 metres, you don’t wonder “why” it is giving you that instruction

  • We identify three points of potential discrepancy or confusion in Explainable Artificial Intelligence (AI) research, to which we give further analytical attention in this paper from a processual, organizational perspective. We summarise these points as questions: Who is the user of an explanation in Explainable AI, and what difference does this make for the nature of the explanation? For what purposes could an explanation from AI be useful? And where and when in time does an explanation reside in Explainable AI? We consider in greater detail what these questions mean and what reflections they prompt

  • There is particular concern regarding the problem of inscrutable, black box AI (Introna, 2016; Faraj et al., 2018; Orlikowski, 2016)

Summary

INTRODUCTION

When Google Maps tells you to turn right in 200 metres, you don’t wonder “why” it is giving you that instruction. In this discussion paper, we aim to bring an organizational perspective to the Explainable AI (XAI) research agenda. We show that the notion of “explanation” is emerging at the core of multi-disciplinary responses to the problem of opaque “black box” deep learning algorithms (Burrell, 2016).

THE PROBLEM
EXPLAINABLE AI
BRINGING AN ORGANIZATIONAL PERSPECTIVE TO XAI RESEARCH
WHO IS THE “USER” IN EXPLAINABLE AI?
WHAT IS THE “PURPOSE” OF AN EXPLANATION IN EXPLAINABLE AI?
WHERE AND WHEN DOES AN EXPLANATION “RESIDE” IN EXPLAINABLE AI?
CONCLUSION
