Abstract
Structural magnetic resonance imaging (sMRI) is commonly used to identify Alzheimer's disease because of its sensitivity to atrophy-induced changes in brain structure. Mainstream convolutional neural network-based deep learning methods ignore long-range dependencies between voxels, making it difficult to learn the global features of sMRI data. In this study, an advanced deep learning architecture called Brain Informer (BraInf) was developed based on an efficient self-attention mechanism. The proposed model integrates representation learning, feature distilling, and classifier modeling into a unified framework. First, the model uses a multihead ProbSparse self-attention block for representation learning. From the perspective of probability sparsity, this self-attention mechanism selects the ⌊ln N⌋ elements that best represent the overall features, which significantly reduces computational cost. Subsequently, a structural distilling block is proposed that applies the concept of patch merging to the distilling operation. The block reduces the size of the three-dimensional tensor, further lowering memory cost while preserving the original information as much as possible and thus significantly improving space complexity. Finally, the feature vector is projected into the classification target space for disease prediction. The effectiveness of the proposed model was validated on the Alzheimer's Disease Neuroimaging Initiative dataset, achieving 97.97% and 91.89% accuracy on Alzheimer's disease and mild cognitive impairment classification tasks, respectively. The experimental results also demonstrate that the proposed framework outperforms several state-of-the-art methods.
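The ProbSparse selection described above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: it computes the query sparsity measure exactly (the Informer family typically estimates it from sampled keys), uses single-head attention, and the function name and shapes are illustrative assumptions. Only the top-u = ⌊ln N⌋ queries, ranked by how far their score distribution departs from uniform, attend over the keys; the remaining outputs fall back to the mean of the values.

```python
import numpy as np

def probsparse_attention(Q, K, V):
    """Simplified ProbSparse self-attention sketch (single head).

    Q, K, V: arrays of shape (N, d). Only the top-u = floor(ln N)
    "active" queries, ranked by a max-minus-mean sparsity measure,
    compute full softmax attention; the remaining "lazy" queries
    receive the mean of V as their output.
    """
    N, d = Q.shape
    scores = Q @ K.T / np.sqrt(d)                # (N, N) scaled dot products
    # Sparsity measure: a query whose score row is nearly uniform
    # contributes little; max - mean highlights peaked (dominant) queries.
    M = scores.max(axis=1) - scores.mean(axis=1)
    u = max(1, int(np.floor(np.log(N))))         # top-u = ⌊ln N⌋ queries
    top = np.argsort(-M)[:u]
    # Lazy queries default to the mean of the values.
    out = np.tile(V.mean(axis=0), (N, 1))
    # Full softmax attention only for the selected queries.
    s = scores[top]
    w = np.exp(s - s.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    out[top] = w @ V
    return out
```

Because only u = ⌊ln N⌋ rows of the full N×N attention map are evaluated in the final aggregation, the per-layer cost drops from O(N²) toward O(N ln N), which is the source of the computational savings the abstract refers to.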