Abstract

Explainable artificial intelligence (XAI) is an emerging, multidisciplinary field of research that seeks to develop methods and tools for making AI systems more explainable or interpretable. XAI researchers increasingly recognise explainability as a context-, audience- and purpose-sensitive phenomenon, rather than a single well-defined property that can be directly measured and optimised. However, there is currently no overarching definition of explainability, which poses a risk of miscommunication between the many different researchers within this multidisciplinary space. This is the problem we seek to address in this paper. We outline a framework, called Explanatory Pragmatism, which we argue has three attractive features. First, it allows us to conceptualise explainability in explicitly context-, audience- and purpose-relative terms, while retaining a unified underlying definition of explainability. Second, it makes visible any normative disagreements about the purposes for which explanations are sought that may underpin conflicting claims about explainability. Third, it allows us to distinguish several dimensions of AI explainability. We illustrate this framework by applying it to a case study involving a machine learning model for predicting whether patients suffering disorders of consciousness were likely to recover consciousness.

Highlights

  • Medicine and healthcare are often highlighted as some of the most promising domains of application for artificial intelligence (AI)

  • We have proposed a pragmatist account of AI explainability

  • We have used it to classify five distinct challenges to explainability, as well as to elucidate the requirements for adequate explanations that arise in medical contexts with regard to three different purposes


Introduction

Medicine and healthcare are often highlighted as some of the most promising domains of application for artificial intelligence (AI). In one recent study, researchers used machine learning to build a prognostic model to predict whether patients at a military hospital in Beijing suffering disorders of consciousness were likely to recover consciousness.

