Abstract

Deep learning methods have been very effective for a variety of medical diagnostic tasks and have even outperformed human experts on some of them. However, the black-box nature of the algorithms has restricted their clinical use. Recent explainability studies aim to highlight the features that most influence a model's decision. Most literature reviews of this area have focused on taxonomy, ethics, and the need for explanations. This review presents the current applications of explainable deep learning across different medical imaging tasks. The various approaches, the challenges for clinical deployment, and the areas requiring further research are discussed from the practical standpoint of a deep learning researcher designing a system for clinical end-users.

Highlights

  • Computer-aided diagnostics (CAD) using artificial intelligence (AI) provides a promising way to make the diagnosis process more efficient and available to the masses

  • The lack of tools to inspect the behavior of black-box models affects the use of deep learning in all domains, including finance and autonomous driving, where explainability and reliability are key to end-user trust

  • This paper reviews studies on the explainability of deep learning models in the context of medical imaging


Introduction

Computer-aided diagnostics (CAD) using artificial intelligence (AI) provides a promising way to make the diagnosis process more efficient and available to the masses. Deep learning is the leading AI method for a wide range of tasks, including medical imaging problems. Despite achieving remarkable results in the medical domain, AI-based methods have not seen significant clinical deployment. This is due to the black-box nature of deep learning algorithms, along with other factors such as computational cost. Simpler AI methods such as linear regression and decision trees are self-explanatory, since the decision boundary used for classification can be visualized in a few dimensions using the model parameters. However, these lack the complexity required for tasks such as the classification of 3D and most 2D medical images. Showing the domain-specific features used in a decision is even more important for users without deep learning expertise, such as most medical professionals.
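To make the attribution idea concrete before the taxonomy below, here is a minimal sketch of occlusion-based attribution, one of the perturbation methods covered in this review: a patch is slid over the image, and the drop in the model's score marks the regions the model relies on. The `model` callable, patch size, stride, and baseline value are illustrative assumptions, not the API of any specific framework.

```python
import numpy as np

def occlusion_map(model, image, patch=16, stride=8, baseline=0.0):
    """Occlusion-based attribution: mask one region at a time and
    record how much the model's class score drops. Larger drops
    indicate regions more important to the decision.

    `model` is any callable mapping an (H, W) array to a scalar
    class score; `image` is a 2D array (e.g., a grayscale slice).
    """
    h, w = image.shape
    reference = model(image)                    # unoccluded score
    heatmap = np.zeros((h, w))
    counts = np.zeros((h, w))
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = baseline
            drop = reference - model(occluded)  # importance of region
            heatmap[y:y + patch, x:x + patch] += drop
            counts[y:y + patch, x:x + patch] += 1
    return heatmap / np.maximum(counts, 1)      # average over overlaps

# Toy usage: a stand-in "model" that scores mean intensity of a
# central region, in place of a real classifier's class score.
toy = lambda img: float(img[24:40, 24:40].mean())
saliency = occlusion_map(toy, np.random.rand(64, 64))
```

The resulting heatmap can be overlaid on the input image so that a clinician can check whether the highlighted regions correspond to clinically meaningful features.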

Taxonomy of Explainability Approaches
  • Explainability Methods—Attribution Based
    • Perturbation Based Methods—Occlusion
    • Backpropagation Based Methods

Applications
  • Attribution Based
    • Brain Imaging
    • Retinal Imaging
    • Breast Imaging
    • CT Imaging
    • X-ray Imaging
    • Skin Imaging
  • Non-Attribution Based
    • Attention Based
    • Concept Vectors
    • Expert Knowledge
    • Similar Images
    • Textual Justification
    • Intrinsic Explainability

Discussion