Abstract

Structural magnetic resonance imaging (sMRI) is commonly used to identify Alzheimer’s disease because it captures atrophy-induced changes in brain structure in detail. Current mainstream convolutional neural network-based deep learning methods ignore the long-range dependencies between voxels; thus, it is challenging to learn the global features of sMRI data. In this study, an advanced deep learning architecture called Brain Informer (BraInf) was developed based on an efficient self-attention mechanism. The proposed model integrates representation learning, feature distilling, and classifier modeling into a unified framework. First, the model uses a multihead ProbSparse self-attention block for representation learning. This self-attention mechanism selects the top ⌊ln N⌋ elements that best represent the overall features from the perspective of probability sparsity, which significantly reduces computational cost. Second, a structural distilling block is proposed that applies the concept of patch merging to the distilling operation. This block reduces the size of the three-dimensional tensor and further lowers the memory cost while preserving the original information as much as possible, yielding a significant improvement in space complexity. Finally, the feature vector is projected into the classification target space for disease prediction. The effectiveness of the proposed model was validated on the Alzheimer’s Disease Neuroimaging Initiative dataset, where it achieved 97.97% and 91.89% accuracy on the Alzheimer’s disease and mild cognitive impairment classification tasks, respectively. The experimental results also demonstrate that the proposed framework outperforms several state-of-the-art methods.
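The two mechanisms summarized above are only named in the abstract, so the following is a minimal PyTorch sketch of how ProbSparse query selection and a 3D patch-merging distilling step could look. The names `probsparse_attention` and `PatchMergingDistill3D`, and the `factor` parameter, are illustrative assumptions rather than the authors' implementation; the query-scoring rule follows the standard Informer formulation (max attention logit minus mean attention logit per query).

```python
# Hypothetical sketch: ProbSparse query selection and 3D patch-merging distilling.
import math
import torch
import torch.nn as nn


def probsparse_query_scores(q, k):
    """Sparsity score per query: max attention logit minus mean attention logit.
    q: (B, L_q, d), k: (B, L_k, d). Higher scores mark 'dominant' queries."""
    scale = 1.0 / math.sqrt(q.size(-1))
    logits = torch.einsum("bqd,bkd->bqk", q, k) * scale      # (B, L_q, L_k)
    return logits.max(dim=-1).values - logits.mean(dim=-1)   # (B, L_q)


def probsparse_attention(q, k, v, factor=5):
    """Attend with only the top-u queries, u ≈ factor * ln(L_q); the remaining
    query positions are filled with the mean of the values."""
    B, L_q, d = q.shape
    u = min(L_q, max(1, int(factor * math.ceil(math.log(L_q)))))
    scores = probsparse_query_scores(q, k)                   # (B, L_q)
    top_idx = scores.topk(u, dim=-1).indices                 # (B, u)

    # Full attention only for the selected (dominant) queries.
    q_top = torch.gather(q, 1, top_idx.unsqueeze(-1).expand(-1, -1, d))
    attn = torch.softmax(
        torch.einsum("bqd,bkd->bqk", q_top, k) / math.sqrt(d), dim=-1
    )
    out_top = torch.einsum("bqk,bkd->bqd", attn, v)          # (B, u, d)

    # Lazy queries receive the mean of the values.
    out = v.mean(dim=1, keepdim=True).expand(B, L_q, d).clone()
    out.scatter_(1, top_idx.unsqueeze(-1).expand(-1, -1, d), out_top)
    return out


class PatchMergingDistill3D(nn.Module):
    """Halve each spatial axis of a (B, D, H, W, C) feature volume by grouping
    2x2x2 neighbouring voxels and linearly projecting the merged channels.
    Assumes D, H, W are even."""

    def __init__(self, channels):
        super().__init__()
        self.proj = nn.Linear(8 * channels, 2 * channels)

    def forward(self, x):                                    # x: (B, D, H, W, C)
        B, D, H, W, C = x.shape
        x = x.reshape(B, D // 2, 2, H // 2, 2, W // 2, 2, C)
        x = x.permute(0, 1, 3, 5, 2, 4, 6, 7)
        x = x.reshape(B, D // 2, H // 2, W // 2, 8 * C)
        return self.proj(x)
```

Under these assumptions, each distilling step reduces the spatial token count by a factor of eight while only doubling the channel dimension, which is consistent with the memory reduction claimed in the abstract.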
