Abstract

Accurate diagnosis of Alzheimer's disease (AD) and its early stages is critical for prompt treatment or potential intervention to delay the disease's progression. Convolutional neural network (CNN) models have shown promising results in structural MRI (sMRI)-based diagnosis, but their performance, particularly for 3D models, is constrained by the lack of labeled training samples. To address the overfitting problem caused by the insufficient training sample size, we propose a three-round learning strategy that combines transfer learning with generative adversarial learning. In the first round, a 3D Deep Convolutional Generative Adversarial Network (DCGAN) was trained with all available sMRI data to learn common sMRI features through unsupervised generative adversarial learning. In the second round, the pre-trained discriminator (D) of the DCGAN was transferred and fine-tuned to learn features specific to the classification of AD versus cognitively normal (CN) subjects. In the final round, the weights learned in the AD versus CN classification task were transferred to the diagnosis of mild cognitive impairment (MCI). By highlighting brain regions with high prediction weights using 3D Grad-CAM, we further enhanced the model's interpretability. The proposed model achieved accuracies of 92.8%, 78.1%, and 76.4% in the classifications of AD versus CN, AD versus MCI, and MCI versus CN, respectively. The experimental results show that our proposed model avoids the overfitting caused by the scarcity of labeled sMRI data and enables the early detection of AD.
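The three-round strategy can be illustrated with the following minimal PyTorch sketch. The layer counts, volume resolution (64^3), and class heads are illustrative assumptions, not the exact architecture or hyperparameters reported in the paper; the adversarial and fine-tuning training loops are omitted.

import copy
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Standard 3D DCGAN discriminator block: strided conv -> batch norm -> LeakyReLU
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.LeakyReLU(0.2, inplace=True),
    )

class Discriminator3D(nn.Module):
    """Round 1: DCGAN discriminator over 1-channel sMRI volumes (assumed 64^3)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            conv_block(1, 32),     # 64^3 -> 32^3
            conv_block(32, 64),    # 32^3 -> 16^3
            conv_block(64, 128),   # 16^3 -> 8^3
            conv_block(128, 256),  # 8^3  -> 4^3
        )
        self.real_fake = nn.Linear(256 * 4 * 4 * 4, 1)  # real/fake head used only for GAN training

    def forward(self, x):
        return self.real_fake(self.features(x).flatten(1))

class Classifier3D(nn.Module):
    """Rounds 2 and 3: reuse the pre-trained discriminator features with a 2-class head."""
    def __init__(self, pretrained_disc: Discriminator3D):
        super().__init__()
        self.features = pretrained_disc.features   # transferred convolutional weights
        self.head = nn.Linear(256 * 4 * 4 * 4, 2)  # AD/CN (round 2) or MCI task (round 3) logits

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Round 1: adversarial pre-training of D (with a matching 3D generator, omitted here)
disc = Discriminator3D()

# Round 2: transfer D's features and fine-tune on labeled AD vs. CN scans
ad_cn_model = Classifier3D(disc)

# Round 3: start MCI fine-tuning from the AD-vs-CN weights
mci_model = copy.deepcopy(ad_cn_model)

The key design choice this sketch reflects is that each round narrows the feature space: generic sMRI features from unsupervised adversarial training, then disease-discriminative features from AD versus CN, and finally task-specific features for the harder MCI classifications.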
