Abstract

Alzheimer's disease (AD) is a complex neurodegenerative disorder whose multifaceted nature calls for innovative approaches that integrate multiple data modalities to improve its detection. However, because multimodal data are costly to collect, multimodal datasets typically suffer from small sample sizes. To mitigate the impact of limited sample size on classification, we introduce a novel deep learning method (One2MFusion) that combines gene expression data with a corresponding 2D representation as a new modality. The gene expression vectors were first mapped to discriminative 2D images used to train a convolutional neural network (CNN). In parallel, the raw gene expression vectors were used to train a feed-forward neural network (FNN); the outputs of the FNN and CNN were then merged, and a joint deep network was trained for the binary classification tasks AD vs. normal control (NC) and mild cognitive impairment (MCI) vs. NC. Fusing the gene expression data with the gene-derived 2D images increased the area under the curve (AUC) from 0.86 (obtained using the 2D image alone) to 0.91 for AD vs. NC, and from 0.76 to 0.88 for MCI vs. NC. The results show that representing gene expression data in an additional discriminative form increases classification performance when fused with the base data.
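To make the two-branch fusion idea concrete, below is a minimal PyTorch sketch of a late-fusion network of this kind. It is an illustration under stated assumptions, not the authors' exact One2MFusion architecture: the layer sizes, the 64x64 image resolution, the gene count, and concatenation-based fusion are all assumptions.

```python
# Illustrative sketch of CNN+FNN late fusion as described in the abstract.
# Layer widths, image size, and the concatenation fusion are assumptions,
# not the published One2MFusion configuration.
import torch
import torch.nn as nn


class FusionNet(nn.Module):
    """CNN branch for the gene-derived 2D image, FNN branch for the raw
    gene expression vector, joint head for one binary task (e.g. AD vs. NC)."""

    def __init__(self, n_genes: int, img_size: int = 64):
        super().__init__()
        # CNN branch over the 2D image representation of the gene vector.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),                      # -> 32 * 4 * 4 = 512 features
        )
        # FNN branch over the raw gene expression vector.
        self.fnn = nn.Sequential(
            nn.Linear(n_genes, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
        )
        # Joint network trained on the merged branch outputs.
        self.head = nn.Sequential(
            nn.Linear(512 + 128, 64), nn.ReLU(),
            nn.Linear(64, 1),                  # logit for the binary task
        )

    def forward(self, image: torch.Tensor, genes: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.cnn(image), self.fnn(genes)], dim=1)
        return self.head(fused)


# Example forward pass on random data (hypothetical gene count of 5000).
model = FusionNet(n_genes=5000)
img = torch.randn(8, 1, 64, 64)    # gene vectors rendered as 64x64 images
vec = torch.randn(8, 5000)         # matching raw expression vectors
logits = model(img, vec)           # shape (8, 1)
```

A separate model of this form would be trained per pairwise task (AD vs. NC, MCI vs. NC), with a sigmoid over the logit and a binary cross-entropy loss.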
