Abstract

Artificial Intelligence (AI) has emerged as a useful aid in numerous clinical applications for diagnosis and treatment decisions. Deep neural networks have shown performance on par with or better than that of clinicians on many tasks, owing to the rapid increase in available data and computational power. To conform to the principles of trustworthy AI, an AI system must be transparent, robust, and fair, and must ensure accountability. Current deep neural solutions are referred to as black boxes because of a lack of understanding of the specifics of their decision-making processes. Therefore, the interpretability of deep neural networks must be ensured before they can be incorporated into routine clinical workflows. In this narrative review, we used systematic keyword searches and domain expertise to identify nine types of interpretability methods that have been applied to deep learning models for medical image analysis, grouped by the type of explanation they generate and by their technical similarities. Furthermore, we report the progress made towards evaluating the explanations produced by the various interpretability methods. Finally, we discuss limitations, provide guidelines for using interpretability methods, and outline future directions for the interpretability of deep neural networks in medical image analysis.

Highlights

  • Medical imaging plays a key role in modern medicine as it allows for the non-invasive visualization of internal structures and metabolic processes of the human body in detail

  • Explainable artificial intelligence (XAI) refers to AI solutions that can provide some details about their functioning in a way that is understandable to the end-users [9]

  • As the field is still in its infancy, there is no general consensus on the definitions of the terms interpretability and explainability in this context, and various definitions have recently been proposed for these terms [9, 108, 86]


Introduction

Medical imaging plays a key role in modern medicine, as it allows for the detailed, non-invasive visualization of internal structures and metabolic processes of the human body. This aids disease diagnosis, treatment planning, and treatment follow-up by adding potentially informative, patient-specific disease characteristics [72, 2]. The amount of healthcare imaging data is rapidly increasing due to advances in hardware, population growth, decreasing costs, and growing awareness of the utility of imaging modalities [131]. This makes it increasingly difficult for radiologists and clinicians to cope with the mounting burden of analyzing the large amounts of data available from disparate sources, and studies have highlighted sometimes considerable inter-observer variability in various clinical imaging tasks [110]. Deep learning has demonstrated state-of-the-art performance on many medical imaging challenges related to classification [15, 3], segmentation [171, 156, 17], and other tasks [111].
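
To make the notion of an explanation concrete, the sketch below computes a vanilla gradient saliency map, one of the simplest attribution-style interpretability methods of the kind this review surveys. It is a minimal illustration, not a method taken from the review itself: the ResNet-18 classifier and the random input tensor are hypothetical stand-ins for a trained medical-imaging model and a scan slice.

```python
import torch
import torchvision.models as models

# Hypothetical stand-ins: an untrained ResNet-18 in place of a trained
# medical-imaging classifier, and a random tensor in place of a scan slice.
model = models.resnet18(weights=None)
model.eval()
image = torch.rand(1, 3, 224, 224, requires_grad=True)

# Forward pass: take the score of the top predicted class.
scores = model(image)
top_class = scores.argmax(dim=1).item()
score = scores[0, top_class]

# Backward pass: the gradient of the class score with respect to the input
# pixels indicates which pixels most influence the prediction.
score.backward()

# Collapse the channel dimension to obtain a per-pixel saliency map.
saliency = image.grad.abs().max(dim=1).values  # shape: (1, 224, 224)
```

Overlaying such a map on the input image highlights the regions the model relied on for its prediction; more sophisticated attribution methods build on this basic idea to produce less noisy explanations.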
