Computer vision researchers and decision-makers have long struggled to understand how deep neural networks (DNNs) perform image classification and to interpret their results. Because their internal workings are poorly understood, these models are commonly referred to as "black boxes," and explaining their behavior should be an integral part of the development process. In this work, we introduce an explainable technique for shoulder abnormality detection; the motivation is to strengthen patients' and medical professionals' trust in DNN technology, which is now widely deployed in the medical domain. The proposed abnormality detector, based on IGrad-CAM++, detects abnormalities in shoulder X-rays. Grad-CAM is a common visualization approach that weights and combines the activation maps obtained from the model; however, the averaged gradient-based terms it uses understate the contribution of the model's learned representations to its predictions. To address this issue, we propose a technique that extends Grad-CAM++ by computing the path integral of the gradient-based terms. Evaluation against alternative techniques shows that the proposed method performs effectively and efficiently on X-ray images and provides better visual explanations than existing techniques.
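To make the path-integral idea concrete, here is a minimal sketch assuming a PyTorch image classifier. The function name igrad_cam_pp, the zero baseline, the straight-line path, the step count, and the exact Grad-CAM++ weighting form are illustrative assumptions, not necessarily the paper's implementation.

```python
# Sketch of integrating Grad-CAM++'s gradient terms along a path from a
# zero baseline to the input (a Riemann-sum approximation of the path
# integral), instead of using gradients from a single backward pass.
import torch
import torch.nn.functional as F

def igrad_cam_pp(model, x, target_layer, class_idx, steps=16):
    """Return a [0, 1] saliency map of the same spatial size as x."""
    acts = []

    def fwd_hook(module, inp, out):
        acts.append(out)

    handle = target_layer.register_forward_hook(fwd_hook)

    total_grad = None
    for alpha in torch.linspace(1.0 / steps, 1.0, steps):
        acts.clear()
        scaled = (alpha * x).requires_grad_(True)      # point on the path
        score = model(scaled)[0, class_idx]            # class score
        grad = torch.autograd.grad(score, acts[0])[0]  # layer gradient
        total_grad = grad if total_grad is None else total_grad + grad
    handle.remove()

    ig = total_grad / steps   # integrated gradient term
    A = acts[0]               # activations at the full input (alpha = 1)

    # Grad-CAM++-style pixel-wise weights, built from the integrated
    # terms rather than single-pass gradients.
    num = ig.pow(2)
    den = 2 * ig.pow(2) + (A * ig.pow(3)).sum(dim=(2, 3), keepdim=True)
    alpha_w = num / torch.where(den != 0, den, torch.ones_like(den))
    weights = (alpha_w * F.relu(ig)).sum(dim=(2, 3), keepdim=True)

    cam = F.relu((weights * A).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear",
                        align_corners=False)
    return cam / (cam.max() + 1e-8)
```

In this sketch the only change relative to standard Grad-CAM++ is that the gradient term is averaged over scaled copies of the input along the baseline-to-input path, which is what prevents the saliency map from understating representations whose single-point gradients are small.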