Abstract
Multi-modal fusion has become an important data-analysis technique in Alzheimer's disease (AD) diagnosis, aiming to effectively extract and exploit the complementary information among different modalities. However, most existing fusion methods pursue a common feature representation through transformation and ignore the discriminative structural information among samples. In addition, most fusion methods rely on high-order feature extraction, such as deep neural networks, which makes it difficult to identify biomarkers. In this paper, we propose a novel method, the deep multi-modal discriminative and interpretability network (DMDIN), which aligns samples in a discriminative common space and provides a new approach to identifying the brain regions (ROIs) significant for AD diagnosis. Specifically, we reconstruct each modality as a hierarchical representation through a multilayer perceptron (MLP), and exploit shared self-expression coefficients, constrained to be block-diagonal, to embed inter-class and intra-class structural information. Further, generalized canonical correlation analysis (GCCA) is adopted as a correlation constraint to generate a discriminative common space, in which samples of the same category cluster together while samples of different categories stay apart. Finally, to enhance the interpretability of the deep model, we use knowledge distillation to reproduce the coordinated representations and capture the influence of each brain region on AD classification. Experiments show that the proposed method outperforms several state-of-the-art methods in AD diagnosis.
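The GCCA constraint mentioned above seeks a single shared representation that is maximally correlated with every modality at once. As a rough illustration only (not the DMDIN implementation, whose representations come from trained MLPs), the classical MAXVAR formulation of GCCA can be sketched in NumPy: sum the ridge-regularized projection matrices of each view and take the top eigenvectors as the common space. The function name `gcca` and the regularization parameter `reg` are our own choices for this sketch.

```python
import numpy as np

def gcca(views, k, reg=1e-6):
    """MAXVAR-style GCCA sketch: find a shared n-by-k matrix G that is
    maximally correlated with all views (each view: n samples x d_j features).

    Illustrative only; DMDIN uses GCCA as a loss on learned MLP features,
    not this closed-form linear solution.
    """
    n = views[0].shape[0]
    M = np.zeros((n, n))
    centered = []
    for X in views:
        Xc = X - X.mean(axis=0)          # center each view
        centered.append(Xc)
        # ridge-regularized projection onto the column space of this view
        P = Xc @ np.linalg.solve(Xc.T @ Xc + reg * np.eye(Xc.shape[1]), Xc.T)
        M += P
    # top-k eigenvectors of the summed projections span the common space
    vals, vecs = np.linalg.eigh(M)       # eigenvalues in ascending order
    G = vecs[:, -k:]
    # per-view linear maps that best reproduce G from each modality
    maps = [np.linalg.solve(Xc.T @ Xc + reg * np.eye(Xc.shape[1]), Xc.T @ G)
            for Xc in centered]
    return G, maps
```

Because `np.linalg.eigh` returns orthonormal eigenvectors, the columns of `G` are orthonormal, which is the usual normalization for the shared GCCA representation.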