Abstract

With the advance of medical imaging technologies, multimodal images such as magnetic resonance imaging (MRI) and positron emission tomography (PET) can capture subtle structural and functional changes in the brain, facilitating the diagnosis of brain diseases such as Alzheimer's disease (AD). In practice, multimodal data are often incomplete, since PET scans are frequently missing due to their high cost or limited availability. Most existing methods simply exclude subjects with missing data, which unfortunately reduces the sample size. In addition, how to extract and combine multimodal features remains challenging. To address these problems, we propose a deep learning framework that integrates a task-induced pyramid and attention generative adversarial network (TPA-GAN) with a pathwise transfer dense convolution network (PT-DCN) for the imputation and classification of multimodal brain images. First, we propose the TPA-GAN, which incorporates pyramid convolution, an attention module, and a disease classification task into a GAN to generate the missing PET data from the corresponding MRI. Then, with the imputed multimodal images, we build a dense convolution network with pathwise transfer blocks that gradually learns and combines multimodal features for final disease classification. Experiments on the ADNI-1/2 datasets show that our method achieves superior performance in image imputation and brain disease diagnosis compared with state-of-the-art methods.

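To make the two architectural ideas named in the abstract concrete, the following is a minimal PyTorch sketch, not the authors' released implementation: it assumes 3D volumes, toy channel counts, and squeeze-and-excitation-style channel attention, and illustrates (1) a pyramid convolution block with an attention gate such as might appear in the TPA-GAN generator, and (2) a pathwise transfer step that exchanges features between the MRI and PET streams as in the PT-DCN classifier. All layer sizes and kernel choices here are illustrative assumptions.

```python
# Hypothetical sketch of the abstract's building blocks; layer sizes and kernels
# are assumptions, not taken from the paper's code.
import torch
import torch.nn as nn


class PyramidAttentionBlock(nn.Module):
    """Multi-scale (pyramid) 3D convolutions followed by channel attention."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        branch_ch = out_ch // 3
        # Three parallel branches with different receptive fields (the "pyramid").
        self.branches = nn.ModuleList([
            nn.Conv3d(in_ch, branch_ch, kernel_size=k, padding=k // 2)
            for k in (1, 3, 5)
        ])
        fused_ch = branch_ch * 3
        # Squeeze-and-excitation-style channel attention over the fused features.
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(fused_ch, fused_ch // 4, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(fused_ch // 4, fused_ch, kernel_size=1),
            nn.Sigmoid(),
        )
        self.out = nn.Conv3d(fused_ch, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        multi_scale = torch.cat([b(x) for b in self.branches], dim=1)
        gated = multi_scale * self.attention(multi_scale)
        return self.out(gated)


class PathwiseTransfer(nn.Module):
    """Exchange information between the MRI and PET feature streams."""

    def __init__(self, ch: int):
        super().__init__()
        # 1x1x1 convolutions project the concatenated streams back to each path.
        self.to_mri = nn.Conv3d(2 * ch, ch, kernel_size=1)
        self.to_pet = nn.Conv3d(2 * ch, ch, kernel_size=1)

    def forward(self, mri_feat: torch.Tensor, pet_feat: torch.Tensor):
        fused = torch.cat([mri_feat, pet_feat], dim=1)
        return self.to_mri(fused), self.to_pet(fused)


if __name__ == "__main__":
    mri = torch.randn(1, 1, 32, 32, 32)  # toy 3D volume, not real ADNI data
    pet = torch.randn(1, 1, 32, 32, 32)
    block = PyramidAttentionBlock(in_ch=1, out_ch=24)
    transfer = PathwiseTransfer(ch=24)
    mri_f, pet_f = transfer(block(mri), block(pet))
    print(mri_f.shape, pet_f.shape)  # both torch.Size([1, 24, 32, 32, 32])
```

In the full framework described by the abstract, blocks of the first kind would sit inside the GAN generator that synthesizes missing PET from MRI, while transfer steps of the second kind would interleave with dense convolution blocks in the classifier so that the two modality paths gradually share features before the final diagnosis layer.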