Abstract

The integration of structural magnetic resonance imaging (sMRI) and deep learning techniques is an important research direction for the automatic diagnosis of Alzheimer's disease (AD). Although existing voxel-based models built on convolutional neural networks (CNNs) achieve satisfactory performance, they handle AD-related brain atrophy at only a single spatial scale and lack spatial localization of abnormal brain regions grounded in model interpretability. To address these limitations, we propose a traceable interpretability model for AD recognition based on multi-patch attention (MAD-Former). MAD-Former consists of two parts: recognition and interpretability. In the recognition part, we design a 3D brain feature extraction network to extract local features and then construct a dual-branch attention structure with different patch sizes to achieve global feature extraction, forming a multi-scale spatial feature extraction framework. Meanwhile, we propose an important attention similarity position loss function to assist model decision-making. The interpretability part introduces a traceable method that obtains a 3D region-of-interest (ROI) space through attention-based selection and receptive field tracing. This space encompasses the key brain tissues that influence model decisions. Experimental results reveal the significant role of brain tissues such as the Fusiform Gyrus (FuG) in AD recognition. MAD-Former achieves outstanding performance on different tasks on the ADNI and OASIS datasets while demonstrating reliable model interpretability.
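
To make the dual-branch multi-patch attention idea concrete, the following PyTorch sketch splits a 3D feature volume into non-overlapping patches at two different sizes and applies self-attention within each branch before fusing the results. This is a minimal illustration under stated assumptions, not the authors' implementation: the module names, patch sizes, embedding dimension, and fusion scheme are all illustrative choices.

    # Minimal sketch of a dual-branch multi-patch attention structure.
    # All names, patch sizes, and dimensions are illustrative assumptions.
    import torch
    import torch.nn as nn

    class PatchAttentionBranch(nn.Module):
        """One branch: embed 3D patches of a given size, then apply
        multi-head self-attention to capture global context at that scale."""

        def __init__(self, in_ch, patch, dim=64, heads=4):
            super().__init__()
            # Non-overlapping 3D patch embedding via strided convolution.
            self.embed = nn.Conv3d(in_ch, dim, kernel_size=patch, stride=patch)
            self.norm = nn.LayerNorm(dim)
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

        def forward(self, x):
            tokens = self.embed(x).flatten(2).transpose(1, 2)  # (B, N_patches, dim)
            out, attn_weights = self.attn(*(self.norm(tokens),) * 3)
            # attn_weights index 3D patch positions, so high-attention patches
            # can later be traced back through the receptive field to an ROI.
            return out.mean(dim=1), attn_weights

    class DualBranchMultiPatchAttention(nn.Module):
        """Two branches with different patch sizes model atrophy at two
        spatial scales; their pooled tokens are concatenated for the head."""

        def __init__(self, in_ch=32, num_classes=2):
            super().__init__()
            self.coarse = PatchAttentionBranch(in_ch, patch=8)  # larger patches
            self.fine = PatchAttentionBranch(in_ch, patch=4)    # smaller patches
            self.head = nn.Linear(64 * 2, num_classes)

        def forward(self, feat):
            c, attn_c = self.coarse(feat)
            f, attn_f = self.fine(feat)
            return self.head(torch.cat([c, f], dim=-1)), (attn_c, attn_f)

    # Usage on a feature volume from a (hypothetical) 3D CNN backbone:
    feats = torch.randn(2, 32, 16, 16, 16)  # (batch, channels, D, H, W)
    logits, attn_maps = DualBranchMultiPatchAttention()(feats)
    print(logits.shape)  # torch.Size([2, 2])

In this sketch the returned attention maps are what an attention-based selection step could rank to pick salient patches, whose receptive fields in the input sMRI volume would then define the 3D ROI space described in the abstract.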
