Abstract

In this paper, we present the potential of Explainable Artificial Intelligence (XAI) methods for decision support in medical image analysis scenarios. By applying three types of explainable methods to the same medical image data set, we aimed to improve the comprehensibility of the decisions provided by a Convolutional Neural Network (CNN). In vivo gastrointestinal images obtained by video capsule endoscopy (VCE) were the subject of visual explanations, with the goal of increasing health professionals’ trust in black-box predictions. We implemented two post hoc interpretable machine learning methods, Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), and an alternative explanation approach, the Contextual Importance and Utility (CIU) method. The produced explanations were assessed by human evaluation: we conducted three user studies based on the explanations provided by LIME, SHAP and CIU. Users from different non-medical backgrounds carried out a series of tests in a web-based survey setting and reported their experience and understanding of the given explanations. The three user groups (n = 20, 20, 20), each given a distinct form of explanation, were quantitatively analyzed. We found that, as hypothesized, the CIU method performed better than both LIME and SHAP in supporting human decision-making and was more transparent, and thus more understandable, to users. CIU also outperformed LIME and SHAP by generating explanations more rapidly. Our findings suggest that there are notable differences in human decision-making between the various explanation support settings. Accordingly, we present three potential explainable methods that, with future improvements in implementation, can be generalized to different medical data sets and can provide effective decision support to medical experts.
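To make the pipeline concrete, the following minimal sketch (not the authors' implementation) shows how post hoc visual explanations of the kind described above can be produced for a trained CNN classifier with the publicly available `lime` and `shap` Python packages. The names `cnn`, `image`, and `train_images` are hypothetical placeholders for a trained Keras model, a preprocessed VCE frame, and a small sample of training frames; CIU explanations would be generated analogously with a CIU image implementation, which is omitted here.

```python
# Minimal sketch (not the authors' code) of post hoc visual explanations
# for a trained CNN on a single endoscopy frame.
# Assumptions: `cnn` is a trained tf.keras classifier, `image` is a
# preprocessed H x W x 3 float array in [0, 1], and `train_images` is an
# array of training frames used as a SHAP background sample.
import numpy as np
import shap
from lime import lime_image
from skimage.segmentation import mark_boundaries


def predict_fn(batch):
    """Return class probabilities so LIME can query the model on perturbed images."""
    return cnn.predict(np.asarray(batch))


# --- LIME: perturb superpixels and fit a local linear surrogate model ---
lime_explainer = lime_image.LimeImageExplainer()
lime_exp = lime_explainer.explain_instance(
    image, predict_fn, top_labels=2, hide_color=0, num_samples=1000)
lime_img, lime_mask = lime_exp.get_image_and_mask(
    lime_exp.top_labels[0], positive_only=True, num_features=5, hide_rest=False)
lime_overlay = mark_boundaries(lime_img, lime_mask)   # regions supporting the prediction

# --- SHAP: attribute the prediction to pixels via expected gradients ---
background = train_images[:50]                        # small reference sample
shap_explainer = shap.GradientExplainer(cnn, background)
shap_values = shap_explainer.shap_values(image[np.newaxis, ...])
shap.image_plot(shap_values, image[np.newaxis, ...])  # red/blue pixel attributions
```

The LIME overlay highlights the superpixels that support the predicted class, while the SHAP plot assigns each pixel a positive or negative contribution; both are shown to users as image overlays in the studies described above.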

Highlights

  • In conventional diagnostics, possible lesions in captured images are checked manually by a doctor in a medical setting

  • Our findings suggest that there are notable differences in human decision-making between various explanation support settings

  • Users with Contextual Importance and Utility (CIU) explanation support were significantly better at recognizing incorrect explanations than those given Local Interpretable Model-agnostic Explanations (LIME) explanations and, to some extent, better than those provided with SHapley Additive exPlanations (SHAP) explanation support

Introduction

In conventional diagnostics, possible lesions in captured images are checked manually by a doctor in a medical setting. In recent years, deep learning and AI-based extraction of information from images have received growing interest in fields such as medical diagnostics, finance, forensics, scientific research and education. In these domains, it is often necessary to understand the reason for a model’s decisions so that a human can validate the outcome [1]. Well-trained machine learning systems can generate accurate predictions regarding various anomalies and can serve as effective clinical practice tools. However, even though their core mathematical concepts can be understood, they lack an explicit declarative information representation and have difficulty producing the underlying explanatory structures [3]. In the study reported here, the last two hypotheses evaluate whether human users are able to detect errors in the explanations provided (in 5 out of 12 cases in the final part of the test phase) and how the ability to recognize correct or incorrect explanations differs among the three user groups.
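As a hedged illustration (not the authors' analysis script) of how the ability to recognize incorrect explanations could be compared quantitatively across the three user groups, the sketch below applies a chi-square test of independence to per-group counts of detected and missed incorrect explanations; the counts are placeholders, not results from the study.

```python
# Hedged sketch: comparing how often the LIME, SHAP and CIU groups detected
# the intentionally incorrect explanations (5 of the 12 test cases).
# The counts below are hypothetical placeholders, NOT the study's data.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: user groups; columns: [incorrect explanations detected, missed].
# With n = 20 users per group and 5 manipulated cases each, every group
# contributes 100 judgments in total.
counts = np.array([
    [55, 45],   # LIME group (placeholder)
    [65, 35],   # SHAP group (placeholder)
    [80, 20],   # CIU group  (placeholder)
])

chi2, p_value, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
# A small p-value would indicate that detection rates differ between the
# explanation-support settings; pairwise follow-up tests would then be needed.
```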
