Abstract

Background

Explainability is one of the most heavily debated topics when it comes to the application of artificial intelligence (AI) in healthcare. Even though AI-driven systems have been shown to outperform humans in certain analytical tasks, the lack of explainability continues to spark criticism. Yet explainability is not a purely technological issue; it also raises a host of medical, legal, ethical, and societal questions that require thorough exploration. This paper provides a comprehensive assessment of the role of explainability in medical AI and an ethical evaluation of what explainability means for the adoption of AI-driven tools into clinical practice.

Methods

Taking AI-based clinical decision support systems as a case in point, we adopted a multidisciplinary approach to analyze the relevance of explainability for medical AI from the technological, legal, medical, and patient perspectives. Drawing on the findings of this conceptual analysis, we then conducted an ethical assessment using the "Principles of Biomedical Ethics" by Beauchamp and Childress (autonomy, beneficence, nonmaleficence, and justice) as an analytical framework to determine the need for explainability in medical AI.

Results

Each of the domains highlights a different set of core considerations and values that are relevant for understanding the role of explainability in clinical practice. From the technological point of view, explainability has to be considered both in terms of how it can be achieved and what is beneficial from a development perspective. From the legal perspective, we identified informed consent, certification and approval as medical devices, and liability as core touchpoints for explainability. Both the medical and patient perspectives emphasize the importance of considering the interplay between human actors and medical AI. We conclude that omitting explainability in clinical decision support systems poses a threat to core ethical values in medicine and may have detrimental consequences for individual and public health.

Conclusions

To ensure that medical AI lives up to its promises, there is a need to sensitize developers, healthcare professionals, and legislators to the challenges and limitations of opaque algorithms in medical AI and to foster multidisciplinary collaboration moving forward.

Highlights

  • Explainability is one of the most heavily debated topics when it comes to the application of artificial intelligence (AI) in healthcare

  • We outline what explainability methods are and where they are applied in medical AI development

  • There is a trade-off between performance and explainability, and this trade-off is a major challenge for developers of clinical decision support systems (see the sketch after this list)
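
To make the last two points more concrete, the following sketch illustrates the idea in code. It is not taken from the paper: the dataset (scikit-learn's bundled breast-cancer data), the choice of models, and all parameters are illustrative assumptions standing in for a clinical decision support task. It contrasts an inherently interpretable logistic regression with a black-box gradient-boosting model and applies one common post-hoc explainability method, permutation feature importance, to the black box.

```python
# Illustrative sketch only: models, dataset, and parameters are assumptions,
# not the authors' setup. Requires scikit-learn and numpy.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

# Stand-in for clinical tabular data.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.25, random_state=0, stratify=data.target
)

# "Glass-box" baseline: a linear model whose standardized coefficients
# can be read directly as feature contributions.
glass_box = make_pipeline(StandardScaler(), LogisticRegression(max_iter=5000))
glass_box.fit(X_train, y_train)

# Black-box comparator: a boosted tree ensemble, often more accurate but
# without directly readable internal logic.
black_box = GradientBoostingClassifier(random_state=0)
black_box.fit(X_train, y_train)

print("logistic regression accuracy:", round(glass_box.score(X_test, y_test), 3))
print("gradient boosting accuracy:  ", round(black_box.score(X_test, y_test), 3))

# Post-hoc explanation for the black box: permutation importance measures
# how much held-out accuracy drops when each feature is randomly shuffled.
result = permutation_importance(black_box, X_test, y_test, n_repeats=20, random_state=0)
top = np.argsort(result.importances_mean)[::-1][:5]
for i in top:
    print(f"{data.feature_names[i]:<25} importance={result.importances_mean[i]:.3f}")
```

The point of the contrast is that any accuracy gain from the black-box model has to be weighed against the fact that its reasoning is only accessible through approximate, post-hoc explanations, whereas the linear model is interpretable by construction.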



Introduction

Explainability is one of the most heavily debated topics when it comes to the application of artificial intelligence (AI) in healthcare. Scholars predict a grim future for the sustainability of healthcare systems throughout the world, and AI is increasingly presented as part of the answer, often in the form of clinical decision support systems (CDSS) that assist clinicians in diagnosing disease and making treatment decisions. Yet technological progress always goes hand in hand with novel questions and significant challenges. Some of these challenges are tied to the technical properties of AI; others relate to the legal, medical, and patient perspectives, making it necessary to adopt a multidisciplinary perspective.


