Abstract

Background: Compared with single-modal neuroimage classification of AD, multi-modal classification can achieve better performance by fusing complementary information. Exploring the synergy among multi-modal neuroimages contributes to identifying the pathological process of neurological disorders. However, it remains difficult to exploit multi-modal information effectively due to the lack of an effective fusion method.

New method: In this paper, we propose a deep multi-modal fusion network based on the attention mechanism, which can selectively extract features from the MRI and PET branches and suppress irrelevant information. In the attention model, the fusion ratio of each modality is assigned automatically according to the importance of the data. A hierarchical fusion method is adopted to ensure the effectiveness of multi-modal fusion.

Results: Evaluated on the ADNI dataset, the model outperforms state-of-the-art methods. In particular, the final classification accuracies for the NC/AD, sMCI/pMCI, and four-class tasks are 95.21%, 89.79%, and 86.15%, respectively.

Comparison with existing methods: Unlike early fusion and late fusion, the hierarchical fusion method helps the network learn the synergy between the multi-modal data. Compared with other prominent algorithms, the attention model enables our network to focus on the regions of interest and to fuse the multi-modal data effectively.

Conclusion: Benefiting from the hierarchical structure with the attention model, the proposed network can exploit both low-level and high-level features extracted from the multi-modal data, improving the accuracy of AD diagnosis. The results show its promising performance.
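The abstract only outlines the mechanism, so the following sketch may help make it concrete. It shows one plausible form of attention-weighted fusion of an MRI branch and a PET branch, where a softmax over per-modality scores yields the fusion ratio described above. This is a minimal sketch under assumed details, not the authors' implementation: the framework (PyTorch), the names conv_block and AttentionFusion, the layer sizes, and the toy inputs are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """One 3-D convolutional stage of a single-modality branch (illustrative)."""
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
        nn.MaxPool3d(2),
    )


class AttentionFusion(nn.Module):
    """Fuse MRI and PET feature maps with a learned, sample-dependent ratio."""

    def __init__(self, channels: int):
        super().__init__()
        # One scalar importance score per modality, computed from pooled features.
        self.score = nn.Linear(channels, 1)

    def forward(self, mri_feat: torch.Tensor, pet_feat: torch.Tensor) -> torch.Tensor:
        # Global average pooling turns each (B, C, D, H, W) map into a (B, C) descriptor.
        mri_vec = mri_feat.flatten(2).mean(dim=2)
        pet_vec = pet_feat.flatten(2).mean(dim=2)
        # A softmax over the two modality scores yields the fusion ratio,
        # so a less informative modality is down-weighted automatically.
        scores = torch.stack([self.score(mri_vec), self.score(pet_vec)], dim=1)  # (B, 2, 1)
        weights = F.softmax(scores, dim=1)
        w_mri = weights[:, 0].view(-1, 1, 1, 1, 1)
        w_pet = weights[:, 1].view(-1, 1, 1, 1, 1)
        return w_mri * mri_feat + w_pet * pet_feat


# Toy usage: fuse the two branches at one level of the hierarchy.
mri = torch.randn(2, 1, 32, 32, 32)  # dummy MRI volumes
pet = torch.randn(2, 1, 32, 32, 32)  # dummy PET volumes
enc_mri, enc_pet = conv_block(1, 16), conv_block(1, 16)
fused = AttentionFusion(16)(enc_mri(mri), enc_pet(pet))  # shape (2, 16, 16, 16, 16)
```

In a hierarchical variant, such a fusion module would be applied after several convolutional stages rather than only once, so that both low-level and high-level features contribute to the final classifier.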
