Abstract

Early diagnosis systems are vitally important for monitoring and following up the condition of neonates. Thermal imaging, as a non-invasive and non-contact method, has been used to monitor neonates for decades. In this study, we train a convolutional neural network (CNN) model that classifies medical thermograms as healthy or unhealthy, using real neonatal thermal images captured over the course of a year in the Neonatal Intensive Care Unit (NICU) of the Faculty of Medicine at Selcuk University, Turkey. The trained model achieved 99.91% accuracy on the training data, 99.47% on the validation data, and 99.82% on the test data, which were never used during training. Although the trained model achieves over 99% accuracy, how it reaches its decisions is not known because of the "black-box" nature of CNNs. Four visual Explainable Artificial Intelligence (XAI) methods, namely GradCAM, GradCAM++, LayerCAM, and EigenCAM, together with a new ensemble visual explanation method named CodCAM, are used to visualise the parts of the neonatal thermal images that are important for the classification. In this way, medical specialists can see which regions of the thermograms (i.e., parts of the neonates) influence the trained CNN's decision, which helps them build trust in AI models and evaluate the results.
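To illustrate the kind of class-activation mapping the abstract refers to, the sketch below shows a minimal Grad-CAM computation in PyTorch. It is not the authors' code: the backbone (a ResNet-18 with a two-class head), the choice of target layer, and the input size are all assumptions made for illustration; the ensemble method CodCAM is not reproduced here.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Hypothetical two-class model (healthy / unhealthy); the paper's actual
# CNN architecture is not specified in this abstract.
model = models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)
model.eval()

target_layer = model.layer4[-1]          # last convolutional block (assumption)
activations, gradients = {}, {}

def fwd_hook(module, inputs, output):
    activations["value"] = output.detach()

def bwd_hook(module, grad_input, grad_output):
    gradients["value"] = grad_output[0].detach()

target_layer.register_forward_hook(fwd_hook)
target_layer.register_full_backward_hook(bwd_hook)

def grad_cam(input_tensor, class_idx):
    """Return a Grad-CAM heatmap (H x W, values in [0, 1]) for one image."""
    logits = model(input_tensor)                      # shape (1, 2)
    model.zero_grad()
    logits[0, class_idx].backward()

    acts = activations["value"]                       # (1, C, h, w)
    grads = gradients["value"]                        # (1, C, h, w)
    weights = grads.mean(dim=(2, 3), keepdim=True)    # channel weights = pooled gradients
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=input_tensor.shape[2:],
                        mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam[0, 0]

# Usage: x stands in for a preprocessed neonatal thermogram tensor.
x = torch.randn(1, 3, 224, 224)
heatmap = grad_cam(x, class_idx=1)   # regions supporting the "unhealthy" class
```

Methods such as GradCAM++, LayerCAM, and EigenCAM differ mainly in how the channel weights and activation combination are computed, but they produce heatmaps of the same form, which can then be overlaid on the thermograms for inspection by medical specialists.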
