Abstract

Interpretability of deep learning (DL) systems is gaining attention in medical imaging as a way to increase experts' trust in the obtained predictions and facilitate their integration into clinical settings. We propose a deep visualization method that provides interpretability for DL classification tasks in medical imaging by means of visual evidence augmentation. The proposed method iteratively unveils abnormalities based on the prediction of a classifier trained only with image-level labels. For each image, initial visual evidence of the prediction is extracted with a given visual attribution technique; this localizes abnormalities, which are then removed through selective inpainting. The procedure is applied iteratively until the system considers the image normal. This yields augmented visual evidence, including less discriminative lesions that were not detected at first but should be considered for the final diagnosis. We apply the method to the grading of two retinal diseases in color fundus images: diabetic retinopathy (DR) and age-related macular degeneration (AMD). We evaluate the generated visual evidence and the performance of weakly-supervised localization of different types of DR and AMD abnormalities, both qualitatively and quantitatively. We show that the augmented visual evidence highlights the biomarkers considered by experts for diagnosis and improves the final localization performance, with a relative increase of 11.2 ± 2.0% in average per-image sensitivity at 10 false positives per image, across different classification tasks, visual attribution techniques, and network architectures. This makes the proposed method a useful tool for exhaustive visual support of DL classifiers in medical imaging.
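
The core of the method is an iterative loop: compute visual attribution for the current prediction, binarize it to localize the most discriminative abnormalities, inpaint those regions, and re-classify, stopping once the classifier considers the image normal. The sketch below illustrates this loop in Python/PyTorch; the classifier, attribution function, inpainting routine, binarization threshold, and normal-class index are illustrative placeholders, not the exact components used in the paper.

    import torch

    def augment_visual_evidence(image, classifier, attribute, inpaint,
                                normal_class=0, attr_threshold=0.5, max_iters=10):
        # image      : (1, C, H, W) tensor (e.g. a color fundus image).
        # classifier : callable returning class logits for the image.
        # attribute  : callable(image) -> (H, W) attribution map scaled to [0, 1].
        # inpaint    : callable(image, mask) -> image with masked pixels replaced
        #              by healthy-looking tissue (selective inpainting).
        evidence = torch.zeros(image.shape[-2:])            # accumulated visual evidence
        current = image.clone()
        for _ in range(max_iters):
            pred = classifier(current).argmax(dim=1).item()
            if pred == normal_class:                        # image now judged normal: stop
                break
            attr_map = attribute(current)                   # evidence for this iteration
            mask = (attr_map > attr_threshold).float()      # most discriminative regions
            evidence = torch.maximum(evidence, attr_map * mask)
            current = inpaint(current, mask)                # remove detected abnormalities
        return evidence, current                            # augmented evidence + inpainted image

Each pass removes the currently most discriminative lesions, so later passes can surface less discriminative ones that would otherwise be missed; the accumulated map constitutes the augmented visual evidence.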

Highlights

  • Deep learning (DL) systems in medical imaging have shown to provide high-performing approaches for diverse classification tasks in healthcare, such as screening of eye diseases [1], [2], scoring of prostate cancer [3], or detection of skin cancer [4]

  • We present a quantitative comparison of baseline visual attribution methods for weakly-supervised lesion localization and an extensive evaluation of the proposed method, analyzing how agnostic it is to the choice of classification task, visual attribution technique, and network architecture

  • For the alternative classifier based on the Inception-v3 architecture, FcDnnR,iv3, the area under the ROC curve (AUC) on the Kaggle test set was 0.93, SE and SP were 0.86 and 0.90, respectively, and κ was 0.80. Table II compares the obtained results on the Kaggle test set with those obtained by other entries in the leaderboard of the Kaggle diabetic retinopathy (DR) detection competition [35] (a sketch of how such metrics can be computed follows this list)
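
As an illustration only, the metrics reported in this highlight (AUC, SE, SP, κ) can be computed with scikit-learn roughly as follows. The 0.5 decision threshold, the binary labels (DR grading is actually multi-class in the paper), the toy data, and the quadratic weighting of kappa (standard in the Kaggle DR competition) are assumptions, not the paper's exact evaluation setup.

    import numpy as np
    from sklearn.metrics import roc_auc_score, cohen_kappa_score, confusion_matrix

    y_true = np.array([0, 0, 1, 1, 1, 0])                     # toy ground-truth labels
    y_prob = np.array([0.10, 0.40, 0.80, 0.70, 0.90, 0.30])   # toy predicted probabilities
    y_pred = (y_prob >= 0.5).astype(int)                      # assumed 0.5 operating point

    auc = roc_auc_score(y_true, y_prob)                       # area under the ROC curve
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    se = tp / (tp + fn)                                       # sensitivity (SE)
    sp = tn / (tn + fp)                                       # specificity (SP)
    kappa = cohen_kappa_score(y_true, y_pred, weights="quadratic")  # assumed quadratic weights
    print(f"AUC={auc:.2f}  SE={se:.2f}  SP={sp:.2f}  kappa={kappa:.2f}")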


Summary

Introduction

Deep learning (DL) systems in medical imaging have been shown to provide high-performing approaches for diverse classification tasks in healthcare, such as screening of eye diseases [1], [2], scoring of prostate cancer [3], or detection of skin cancer [4]. Interpretability methods based on visual attribution have become very popular, such as the ones defined and described in Table I: saliency [18], guided backpropagation [19], integrated gradients [20], Grad-CAM [21], and guided Grad-CAM [21]. These attribution methods provide an interpretation of the network’s decision by assigning an attribution value, sometimes called “relevance” or “contribution”, to each input feature of the network depending on its estimated contribution to the network output [22]. This makes it possible to highlight the features in the input image that contribute to the output prediction and, in turn, enables the weakly-supervised detection of objects.
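
The simplest of these attribution techniques, the saliency map [18], attributes to each pixel the magnitude of the gradient of the class score with respect to that pixel. Below is a minimal sketch assuming a PyTorch classifier; the model, input shape, and normalization are placeholders rather than the paper's setup.

    import torch

    def saliency_map(model, image, target_class=None):
        # image: (1, C, H, W) tensor; returns an (H, W) map of absolute input gradients.
        model.eval()
        image = image.detach().clone().requires_grad_(True)
        scores = model(image)                                # class logits
        if target_class is None:
            target_class = scores.argmax(dim=1).item()       # explain the predicted class
        scores[0, target_class].backward()                   # gradient of score w.r.t. pixels
        attr = image.grad.detach().abs().max(dim=1)[0][0]    # max magnitude over color channels
        return attr / (attr.max() + 1e-8)                    # normalize to [0, 1]

Guided backpropagation, integrated gradients, and Grad-CAM refine this basic idea by modifying how gradients are propagated or by aggregating them over intermediate activations or interpolated inputs.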


