Abstract

Alzheimer's disease (AD) is a progressive and irreversible neurodegenerative disorder that most often affects people over 65 years of age. Accurate and early diagnosis of AD is vital for patient care and for the development of future treatments. Positron Emission Tomography (PET) is a functional molecular imaging modality that has proven to be a powerful tool for understanding the brain changes associated with AD. Most existing methods extract handcrafted features from images and then train a classifier to distinguish AD from other groups. The success of these computer-aided diagnosis methods depends heavily on image preprocessing, including rigid registration and segmentation. Motivated by the success of deep learning in image classification, this paper proposes a new classification framework that combines 2D convolutional neural networks (CNNs) and recurrent neural networks (RNNs), learning the features of a 3D PET image by decomposing it into a sequence of 2D slices. In this framework, hierarchical 2D CNNs capture the intra-slice features, while the gated recurrent unit (GRU) of an RNN extracts the inter-slice features for the final classification. No rigid image registration or segmentation is required for the PET images. Our method is evaluated on baseline PET images of 339 subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database, comprising 93 AD patients, 146 subjects with mild cognitive impairment (MCI), and 100 normal controls (NC). Experimental results show that the proposed method achieves an AUC of 95.28% for the classification of AD vs. NC and 83.90% for the classification of MCI vs. NC, demonstrating promising classification performance.
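The pipeline described above (decompose the 3D volume into 2D slices, extract intra-slice features with a CNN, then aggregate inter-slice features with a GRU) can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the filter counts, hidden size, toy convolution, and random weights below are illustrative assumptions, and a real system would use a trained deep CNN rather than a single random convolution layer.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv2d(img, kernels):
    # Naive valid-mode 2D cross-correlation: img (H, W), kernels (K, kh, kw).
    K, kh, kw = kernels.shape
    H, W = img.shape
    out = np.empty((K, H - kh + 1, W - kw + 1))
    for k in range(K):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[k, i, j] = np.sum(img[i:i + kh, j:j + kw] * kernels[k])
    return out

def slice_features(vol, kernels):
    # Intra-slice features: conv -> ReLU -> global average pool, per 2D slice.
    feats = []
    for s in range(vol.shape[0]):              # slices along the first axis
        fmap = relu(conv2d(vol[s], kernels))
        feats.append(fmap.mean(axis=(1, 2)))   # one (K,) vector per slice
    return np.stack(feats)                     # (num_slices, K)

def gru(xs, Wz, Uz, Wr, Ur, Wh, Uh):
    # Inter-slice aggregation with a GRU; returns the last hidden state.
    h = np.zeros(Uz.shape[0])
    for x in xs:
        z = sigmoid(Wz @ x + Uz @ h)                 # update gate
        r = sigmoid(Wr @ x + Ur @ h)                 # reset gate
        h_tilde = np.tanh(Wh @ x + Uh @ (r * h))     # candidate state
        h = (1 - z) * h + z * h_tilde
    return h

# Toy "3D PET volume" (slices, H, W) and randomly initialized weights.
vol = rng.standard_normal((8, 10, 10))
kernels = rng.standard_normal((4, 3, 3)) * 0.1
hid, feat_dim = 6, 4
Wz, Wr, Wh = [rng.standard_normal((hid, feat_dim)) * 0.1 for _ in range(3)]
Uz, Ur, Uh = [rng.standard_normal((hid, hid)) * 0.1 for _ in range(3)]
Wo = rng.standard_normal((2, hid)) * 0.1             # 2-way output layer

h_last = gru(slice_features(vol, kernels), Wz, Uz, Wr, Ur, Wh, Uh)
logits = Wo @ h_last
probs = np.exp(logits) / np.exp(logits).sum()        # softmax, e.g. AD vs. NC
```

Because each slice is processed independently before the GRU, no rigid registration of the volume is assumed; the recurrent stage is what ties the per-slice evidence together into a single subject-level prediction.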

