Abstract

Computer vision has achieved notable success in medical diagnosis, classifying diseases with high accuracy. However, conventional classifiers that map an image directly to a label provide insufficient information for medical professionals to act on, raising concerns about the trust and reliability of models whose results cannot be explained. To gain local insight into cancerous regions, a separate task such as image segmentation must be trained to aid doctors in treating patients, which doubles training time and cost, rendering the diagnosis system inefficient and difficult for the public to accept. To tackle this issue and advance AI-first medical solutions, this paper proposes a multi-output network that follows a U-Net architecture for the image segmentation output and features an additional CNN module for an auxiliary classification output. Class Activation Maps (CAMs) provide insight into which of a convolutional neural network's feature maps lead to its classification; for lung diseases, the region of interest is further enhanced by U-Net-assisted CAM visualization. Our proposed model therefore combines an image segmentation model and a classifier to crop the class activation map of a chest X-ray to the lung region only, producing a visualization that improves explainability while simultaneously generating the classification result, which builds trust in AI-led diagnosis systems. The proposed U-Net model achieves 97.72% accuracy and a Dice coefficient of 0.9691 on test data from the COVID-QU-Ex dataset, which includes both diseased and healthy lungs.
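The two ideas at the core of the abstract, restricting a CAM heatmap to the U-Net's predicted lung mask and scoring segmentation with the Dice coefficient, can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: the function names and the toy nested-list "images" are assumptions, and a real pipeline would operate on framework tensors.

```python
# Illustrative sketch (not the paper's code) of U-Net-assisted CAM masking
# and the Dice coefficient. Toy 2x2 nested lists stand in for real images.

def masked_cam(cam, lung_mask):
    """Zero out CAM activations outside the predicted lung region
    (elementwise product of the heatmap and the binary mask)."""
    return [[c * m for c, m in zip(cam_row, mask_row)]
            for cam_row, mask_row in zip(cam, lung_mask)]

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2*|A ∩ B| / (|A| + |B|) for binary masks; eps avoids
    division by zero when both masks are empty."""
    flat_pred = [p for row in pred for p in row]
    flat_target = [t for row in target for t in row]
    intersection = sum(p * t for p, t in zip(flat_pred, flat_target))
    total = sum(flat_pred) + sum(flat_target)
    return (2.0 * intersection + eps) / (total + eps)

if __name__ == "__main__":
    cam = [[0.1, 0.8], [0.6, 0.2]]   # hypothetical class activation map
    mask = [[0, 1], [1, 0]]          # hypothetical U-Net lung mask
    print(masked_cam(cam, mask))     # heatmap restricted to lung pixels
    print(dice_coefficient(mask, [[0, 1], [1, 1]]))
```

The masking step is what yields the cropped visualization the paper describes: activations outside the lungs are suppressed so the explanation only highlights anatomically relevant regions.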
