Abstract

This paper discusses explainable deep learning approaches for medical imaging applications. We evaluate three feature-visualization techniques for inspecting deep neural network models developed on medical-domain images. The aim of these techniques is to open up the internals of deep neural networks, specifically convolutional neural networks (CNNs), by exposing what the models learn layer by layer. They can help clinicians gain confidence in complex deep learning models, rather than treating them as black boxes, because they generate meaningful views of a model's layers and their feature maps. Data scientists can also adopt them to improve model performance by examining model behavior at each layer. We implemented three approaches, namely Activation Map, Deconvolution, and Grad-CAM localization, in Keras with the TensorFlow backend, and validated the results against CNN models developed on natural images. We were also able to generate activation-map and deconvolution visualizations for models with even filter sizes. Such methods can play a crucial role in obtaining approval from regulatory authorities.
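To illustrate one of the three techniques, the core of Grad-CAM can be sketched independently of any framework: channel-importance weights are obtained by global-average-pooling the gradients of the class score with respect to a convolutional layer's feature maps, and the heatmap is the ReLU of the weighted sum of those maps. The sketch below assumes the feature maps and gradients have already been extracted from the model (e.g. via a Keras gradient tape); the function name and array shapes are illustrative, not from the paper.

```python
import numpy as np

def grad_cam_heatmap(feature_maps, gradients):
    """Compute a Grad-CAM localization map.

    feature_maps: (H, W, K) activations of the chosen conv layer.
    gradients:    (H, W, K) gradients of the class score with
                  respect to those activations.
    Returns an (H, W) heatmap, ReLU-ed and scaled to [0, 1].
    """
    # Channel importance alpha_k: global-average-pool the gradients.
    weights = gradients.mean(axis=(0, 1))                       # (K,)
    # Weighted sum of the feature maps over the channel axis.
    cam = np.tensordot(feature_maps, weights, axes=([2], [0]))  # (H, W)
    # ReLU: keep only features with a positive influence on the class.
    cam = np.maximum(cam, 0.0)
    # Normalize so the map can be overlaid on the input image.
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam
```

In practice the resulting map is upsampled to the input resolution and overlaid on the original image, which is what makes the localization readable to a clinician.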
