Abstract

Alzheimer’s disease (AD) is a degenerative neurological disorder that begins with memory loss and ultimately leads to a total loss of cognitive function. Researchers are interested in using magnetic resonance imaging (MRI) and positron emission tomography (PET) to identify people with mild cognitive impairment (MCI), a stage that precedes AD. The transition from MCI to AD is characterized by significant hippocampal loss and temporal lobe atrophy, which can be visualized with T1-weighted structural MRI. PET visualizes brain glucose metabolism, an indicator of neuronal activity, making it a viable neuroimaging method for AD diagnosis. Extracting and fusing the structural and metabolic information about brain alterations contained in multimodal data is crucial for achieving an accurate classification result. Therefore, this work introduces a new end-to-end coupled-GAN (CGAN) based classification (CGANC) architecture. The proposed CGANC network consists of two sub-models: a CGAN that extracts fused features from multimodal data, and a CNN classifier that classifies these features. The CGAN is trained to encode MRI and PET images into a shared latent space; the fused features extracted from this space are then classified according to the particular stage of AD. To test the effectiveness of the proposed approach, experiments are conducted on the publicly available ADNI dataset and the results are compared with state-of-the-art methods. The proposed method’s source code will be made freely available at https://github.com/ChandrajitChoudhury/CGAN-AD.
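The fusion pipeline described above — two modality-specific encoders mapping MRI and PET inputs into one shared latent space, followed by fusion and stage classification — can be sketched in miniature. This is a hypothetical pure-Python illustration, not the authors' implementation: the real CGAN uses deep convolutional encoders trained adversarially, and all dimensions, linear maps, and the concatenation-based fusion rule here are illustrative assumptions.

```python
import math
import random

random.seed(0)
LATENT_DIM = 8  # size of the shared latent space (illustrative choice)

def make_linear(in_dim, out_dim):
    """Random weight matrix standing in for a trained layer."""
    return [[random.gauss(0, 0.1) for _ in range(in_dim)] for _ in range(out_dim)]

def apply(W, x, activation=math.tanh):
    """Apply a linear map followed by an element-wise activation."""
    return [activation(sum(w * xi for w, xi in zip(row, x))) for row in W]

# Two modality-specific encoders mapping into the same shared latent space.
enc_mri = make_linear(16, LATENT_DIM)  # stand-in for the MRI encoder
enc_pet = make_linear(12, LATENT_DIM)  # stand-in for the PET encoder

mri = [random.random() for _ in range(16)]  # toy T1-weighted MRI feature vector
pet = [random.random() for _ in range(12)]  # toy PET feature vector

z_mri = apply(enc_mri, mri)
z_pet = apply(enc_pet, pet)

# Fuse the two latent codes; concatenation is one simple fusion choice.
z_fused = z_mri + z_pet

# Linear stand-in for the CNN classifier over AD stages (e.g. CN / MCI / AD).
classifier = make_linear(2 * LATENT_DIM, 3)
logits = apply(classifier, z_fused, activation=lambda v: v)
predicted_stage = logits.index(max(logits))

print(len(z_fused), predicted_stage)
```

In the actual method, both encoders and the classifier are learned jointly end-to-end, so the shared latent space is shaped by the adversarial training rather than fixed random weights as in this sketch.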
