Abstract

Background: Wide-ranging concerns exist regarding the use of black-box modelling methods in sensitive contexts such as healthcare. Despite performance gains and hype, uptake of artificial intelligence (AI) is hindered by these concerns. Explainable AI is thought to help alleviate them. However, existing definitions of "explainable" do not provide a solid foundation for this work.

Methods: We critique recent reviews of the literature on: the agency of an AI within a team; mental models, especially as they apply to healthcare, and the practical aspects of their elicitation; and existing definitions of explainability, especially from the perspective of AI researchers. On the basis of this literature, we create a new definition of "explainable", with supporting terms, that can be objectively evaluated. Finally, we apply the new definition to three existing models, demonstrating how it applies to previous research and providing guidance for future research.

Results: Existing definitions of explanation are premised on global applicability and do not address the question "understandable by whom?". Eliciting mental models can be likened to creating explainable AI if one considers the AI a member of a team. On this basis, we define explainability in terms of the context of the model, comprising the purpose, audience, and language of the model and its explanation. As examples, this definition is applied to regression models, neural nets, and human mental models in operating-room teams.

Conclusions: Existing definitions of explanation have limitations for ensuring that the concerns raised by practical applications are resolved. Defining explainability in terms of the context of application forces evaluations to be aligned with the practical goals of the model. Further, it allows researchers to explicitly distinguish between explanations for technical and lay audiences, so that different evaluations can be applied to each.

Highlights

  • Wide-ranging concerns exist regarding the use of black-box modelling methods in sensitive contexts such as healthcare

  • The results are presented in three subsections, comprising the justification for the new definition of explainability and the definition itself

  • We argue that if an artificial intelligence (AI) can be considered a team member with agency, then mental models, an accepted framework for explaining thought processes between team members, can be used to resolve the concerns in the definitions of explainability for AI



Introduction

Wide-ranging concerns exist regarding the use of black-box modelling methods in sensitive contexts such as healthcare, and existing definitions of "explainable" do not provide a solid foundation for addressing them. The use of such algorithms in making decisions regarding sensitive aspects of our lives raises concerns [1]. Black-box models, those whose inner workings we do not understand, are of greatest concern. This has been reflected in the General Data Protection Regulation (GDPR), which contains provisions widely interpreted as a right to explanation of automated decisions. Many definitions of explanation exist [6,7,8,9,10,11,12], but these do not provide the solid foundation needed for this work.
