Abstract

Multimodal classification methods, which integrate complementary information from different modalities, have clear advantages over traditional single-modality approaches for diagnosing Alzheimer's disease (AD) and its prodromal stage, mild cognitive impairment (MCI). With the increasing amount of high-dimensional heterogeneous data to be processed, multi-modality feature selection has become a crucial research direction for AD classification. However, traditional methods usually depict the data structure with a pre-defined similarity matrix used as prior knowledge, which makes it difficult to precisely measure the intrinsic relationships across different modalities in high-dimensional space. In this paper, we propose a novel multi-modality feature selection method, Adaptive-Similarity-based Multi-modality Feature Selection (ASMFS), which performs adaptive similarity learning and feature selection simultaneously. Specifically, a similarity matrix is learned by jointly considering the different modalities, while efficient feature selection is conducted by imposing a group-sparsity-inducing l2,1-norm constraint. Evaluated on the Alzheimer's Disease Neuroimaging Initiative (ADNI) database with baseline MRI and FDG-PET imaging data collected from 51 AD patients, 43 MCI converters (MCI-C), 56 MCI non-converters (MCI-NC), and 52 normal controls (NC), the proposed method demonstrates its effectiveness and superiority over other state-of-the-art approaches for multi-modality classification of AD/MCI.
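
To make the group-sparsity component concrete, below is a minimal sketch of l2,1-norm regularized feature selection solved by proximal gradient descent. This illustrates only the generic l2,1-norm mechanism mentioned in the abstract, not the authors' full ASMFS objective (which additionally learns the cross-modality similarity matrix adaptively); the solver choice, variable names, and toy data are assumptions made purely for illustration.

    import numpy as np

    def prox_l21(W, thresh):
        """Row-wise soft-thresholding: proximal operator of the l2,1-norm.
        Rows whose l2-norm falls below `thresh` are zeroed, so the
        corresponding features are discarded jointly across all tasks."""
        norms = np.linalg.norm(W, axis=1, keepdims=True)
        scale = np.maximum(0.0, 1.0 - thresh / np.maximum(norms, 1e-12))
        return W * scale

    def l21_feature_selection(X, Y, lam=1.0, n_iter=500):
        """Minimize (1/2)||XW - Y||_F^2 + lam * ||W||_{2,1} with proximal
        gradient (ISTA). Non-zero rows of W index the selected features."""
        d = X.shape[1]
        W = np.zeros((d, Y.shape[1]))
        step = 1.0 / (np.linalg.norm(X, 2) ** 2)   # 1 / Lipschitz constant of the gradient
        for _ in range(n_iter):
            grad = X.T @ (X @ W - Y)               # gradient of the smooth loss term
            W = prox_l21(W - step * grad, step * lam)
        return W

    # Toy usage: 100 subjects, 40 concatenated imaging features, 2 target columns.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 40))
    Y = X[:, :5] @ rng.standard_normal((5, 2)) + 0.1 * rng.standard_normal((100, 2))
    W = l21_feature_selection(X, Y, lam=5.0)
    selected = np.flatnonzero(np.linalg.norm(W, axis=1) > 1e-6)
    print("selected feature indices:", selected)

Because the l2,1-norm penalizes the l2-norms of whole rows of W, entire rows are driven exactly to zero, which is what yields feature selection that is consistent across modalities or tasks rather than independent per-task sparsity.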
