Abstract

Alzheimer’s disease (AD) is a central nervous system disease that mainly affects the elderly. Early diagnosis of AD is valuable for delaying the progression of the disease. With the development of medical imaging technology, modalities such as structural Magnetic Resonance Imaging (sMRI) and Fluorodeoxyglucose Positron Emission Tomography (FDG-PET) can reveal structural and functional lesions of the brain to assist in diagnosis. However, FDG-PET data are often incomplete due to radiation exposure and high cost. Most existing methods simply exclude subjects with missing modalities, which is notably one-sided. Meanwhile, how to extract and fuse multimodal features at different levels remains a challenge. To address these issues, we propose a Consistent Manifold Projection Generative Adversarial Network (CMPGAN) for FDG-PET generation and a Multilevel Multimodal Fusion Diagnosis Network (MMFDN) for diagnosing AD. First, the proposed CMPGAN projects the data distribution onto low-dimensional manifolds through consistent manifold projection, and we present a distribution distance metric to optimize the model; this design avoids mode collapse and vanishing gradients. Then, we construct a multiscale feature-level extraction network based on our proposed radial medley unit and a voxel-level feature extraction network based on a harmonic voxel fusion matrix; fusing the two parts yields the final diagnosis. Experimental results indicate that our proposed method outperforms state-of-the-art methods in both FDG-PET generation and AD diagnosis. Our approach can also help guide clinicians in diagnosing disease.
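
To make the fusion idea concrete, the sketch below shows a generic two-branch 3D-CNN that extracts features from sMRI and (real or generated) FDG-PET volumes and fuses them for classification. It is a minimal illustration only: the layer sizes, module names, and concatenation-based fusion are assumptions for exposition, not the paper's actual CMPGAN/MMFDN architecture (which uses the radial medley unit and harmonic voxel fusion matrix described above).

```python
import torch
import torch.nn as nn

class Branch3D(nn.Module):
    """A small 3D-CNN feature extractor for one imaging modality (illustrative)."""
    def __init__(self, in_channels=1, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),          # global pooling -> (B, 32, 1, 1, 1)
        )
        self.proj = nn.Linear(32, feat_dim)

    def forward(self, x):
        h = self.net(x).flatten(1)            # (B, 32)
        return self.proj(h)                   # (B, feat_dim)

class TwoBranchFusionClassifier(nn.Module):
    """Extract features from sMRI and FDG-PET, concatenate, and classify AD vs. NC."""
    def __init__(self, feat_dim=64, num_classes=2):
        super().__init__()
        self.smri_branch = Branch3D(feat_dim=feat_dim)
        self.pet_branch = Branch3D(feat_dim=feat_dim)
        self.classifier = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, smri, pet):
        fused = torch.cat([self.smri_branch(smri), self.pet_branch(pet)], dim=1)
        return self.classifier(fused)

# Usage: a batch of 2 subjects with 32^3 volumes for brevity (real scans are larger).
model = TwoBranchFusionClassifier()
smri = torch.randn(2, 1, 32, 32, 32)
pet = torch.randn(2, 1, 32, 32, 32)
logits = model(smri, pet)                     # shape: (2, 2)
```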
