Abstract

Background: Acute bilirubin encephalopathy (ABE) is a major cause of infant mortality and disability, making early detection and treatment essential to prevent progression and complications. Methods: To enhance the diagnostic capabilities of multi-modal magnetic resonance imaging (MRI) for ABE, we proposed a deep learning model that integrates an attention module (AM) with a central network (CentralNet). The model was evaluated on MRI data from 145 newborns diagnosed with ABE and 140 non-ABE newborns, using both T1-weighted and T2-weighted images. Results: (1) In the single-modality experiments, adding the AM significantly improved all performance metrics relative to models without it. For T1-weighted MRI, the accuracy was 0.639 ± 0.04, the AUC 0.682 ± 0.037, and the sensitivity 0.688 ± 0.09; for T2-weighted MRI, the accuracy was 0.738 ± 0.039 and the AUC 0.796 ± 0.025. (2) In the multi-modal experiments with T1 + T2 images, our model achieved the best accuracy of 0.845 ± 0.018, AUC of 0.913 ± 0.02, and sensitivity of 0.954 ± 0.069, compared with models lacking the AM and CentralNet. Specificity remained relatively stable, while precision and F1 score increased significantly, reaching 0.792 ± 0.048 and 0.862 ± 0.017, respectively. Conclusions: This study demonstrates the effectiveness of combining attention modules with CentralNet, significantly enhancing the accuracy of multi-modal MRI in classifying ABE, and offers a new perspective on the clinical application of multi-modal MRI in ABE diagnosis.
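The abstract describes the architecture only at a high level. To make the general idea concrete (a per-modality attention module feeding a CentralNet-style central fusion stream), here is a minimal sketch, assuming PyTorch; the layer widths, the squeeze-and-excitation-style channel attention, the 2D single-channel inputs, and the learned per-level fusion weights are all illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style attention module (AM): reweights
    feature channels by globally pooled importance scores."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(x)  # broadcast channel weights over H x W

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class CentralNetFusion(nn.Module):
    """Two modality streams (T1, T2) plus a central stream that, at each
    level, takes a learned weighted sum of its own features and the two
    attention-weighted modality features (CentralNet-style fusion)."""
    def __init__(self, widths=(16, 32, 64), num_classes=2):
        super().__init__()
        chs = (1,) + tuple(widths)
        n = len(widths)
        self.t1 = nn.ModuleList(conv_block(chs[i], chs[i + 1]) for i in range(n))
        self.t2 = nn.ModuleList(conv_block(chs[i], chs[i + 1]) for i in range(n))
        self.am_t1 = nn.ModuleList(ChannelAttention(w) for w in widths)
        self.am_t2 = nn.ModuleList(ChannelAttention(w) for w in widths)
        self.central = nn.ModuleList(conv_block(chs[i], chs[i + 1]) for i in range(n))
        # One (alpha_central, alpha_t1, alpha_t2) weight triple per fusion level.
        self.alphas = nn.Parameter(torch.ones(n, 3))
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(widths[-1], num_classes))

    def forward(self, x_t1, x_t2):
        h1, h2, hc = x_t1, x_t2, 0.5 * (x_t1 + x_t2)  # seed the central stream
        for i in range(len(self.central)):
            h1 = self.am_t1[i](self.t1[i](h1))  # modality stream + AM
            h2 = self.am_t2[i](self.t2[i](h2))
            a = self.alphas[i]
            hc = a[0] * self.central[i](hc) + a[1] * h1 + a[2] * h2
        return self.head(hc)

# Usage on dummy single-channel 2D slices (batch of 2):
model = CentralNetFusion()
logits = model(torch.randn(2, 1, 128, 128), torch.randn(2, 1, 128, 128))
print(logits.shape)  # torch.Size([2, 2]) -> ABE vs. non-ABE scores
```

The central stream mirrors the CentralNet idea of fusing, at every depth, its own features with both modality streams through learned scalar weights, so the network can adapt how much each MRI contrast contributes at each level rather than fusing only once at the end.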
