Abstract

Fetal ultrasound images are widely used to visualize fetal development during pregnancy. These ultrasound image planes provide information about the anatomy of the fetus, helping healthcare professionals identify abnormalities. Several AI tools are now being applied to classify fetal planes automatically, and accurate classification of fetal ultrasound image planes is crucial for correct prenatal diagnosis and care. However, while deep learning models have shown promise in image classification, their "black box" nature makes their decisions difficult to interpret, which is a significant concern in healthcare analytics. This paper addresses the interpretability of the decisions made by a Convolutional Neural Network (CNN) for fetal ultrasound image classification using an explainable AI (XAI) technique. Despite their accuracy, established solutions lack the transparency medical professionals need to trust a model's predictions. LIME (Local Interpretable Model-agnostic Explanations) is applied to interpret a CNN classifier that achieves high classification accuracy. The LIME explanations generated on top of the CNN highlight the regions that contribute positively and negatively to each classification decision. This approach offers a transparent and trustworthy way to leverage AI in prenatal diagnostics.
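To make the described pipeline concrete, the sketch below shows how LIME can be layered on top of an image classifier using the `lime` Python package. It is a minimal illustration, not the paper's actual implementation: the stand-in CNN, the six-class count, the 224x224 input size, and the placeholder image are all assumptions; in practice the trained fetal-plane model and a real ultrasound frame would be substituted.

```python
import numpy as np
import matplotlib.pyplot as plt
from lime import lime_image
from skimage.segmentation import mark_boundaries
from tensorflow import keras

NUM_PLANES = 6  # assumed number of fetal-plane classes (illustrative)

# Stand-in CNN; in practice, load the trained fetal-plane classifier instead,
# e.g. model = keras.models.load_model("fetal_plane_cnn.h5") (hypothetical file).
model = keras.Sequential([
    keras.layers.Input((224, 224, 3)),
    keras.layers.Conv2D(16, 3, activation="relu"),
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(NUM_PLANES, activation="softmax"),
])

def predict_fn(images):
    # LIME passes a batch of perturbed images and expects class probabilities.
    return model.predict(images.astype("float32") / 255.0, verbose=0)

# Placeholder ultrasound frame; substitute a real (H, W, 3) uint8 image.
image = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image, predict_fn,
    top_labels=1,      # explain only the most probable plane
    hide_color=0,      # perturbed superpixels are blacked out
    num_samples=1000,  # perturbations used to fit the local surrogate model
)

# Overlay superpixels: green regions supported the predicted class,
# red regions counted against it.
temp, mask = explanation.get_image_and_mask(
    explanation.top_labels[0],
    positive_only=False, num_features=10, hide_rest=False,
)
plt.imshow(mark_boundaries(temp / 255.0, mask))
plt.axis("off")
plt.title("LIME explanation for predicted fetal plane")
plt.show()
```

The resulting overlay is what the abstract refers to: superpixels that pushed the CNN toward the predicted plane are highlighted separately from those that pushed against it, giving clinicians a per-image visual rationale for the classification.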
