Multimodal learning is widely used for the automated early diagnosis of Alzheimer's disease. However, current studies rest on the assumption that different modalities provide complementary information for classifying samples from the public Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. Moreover, the choice of modality combination and of classification task are external factors that affect the performance of multimodal learning. We therefore summarise three main problems in the early diagnosis of Alzheimer's disease: (i) unimodal versus multimodal learning; (ii) different combinations of modalities; and (iii) classification across different tasks. In this paper, we propose a novel and reproducible multi-classification framework for the automated early diagnosis of Alzheimer's disease to experimentally examine these three problems. The framework comprises four layers, two types of feature representation method, and two types of model. It is also extensible, in that it is compatible with new modalities generated by emerging technologies. We then conduct a series of experiments on the ADNI-1 dataset and obtain possible explanations for the early diagnosis of Alzheimer's disease through multimodal learning. Experimental results show that, among single modalities, Single Nucleotide Polymorphism (SNP) data achieve the highest accuracy of 57.09% in the early diagnosis of Alzheimer's disease. In the modality combinations, adding the SNP modality improves multimodal machine learning performance by 3% to 7%. Furthermore, we analyse and discuss the most relevant Region of Interest (ROI) and SNP features of the different modalities.