Abstract

Alzheimer's disease (AD) is one of the most common progressive neurodegenerative diseases. Structural magnetic resonance imaging (MRI) provides abundant information on the anatomical structure of the brain, while fluorodeoxyglucose positron emission tomography (PET) captures its metabolic activity. Previous studies have demonstrated that multi-modality images can improve the diagnosis of AD. However, existing methods rely on handcrafted features that demand domain-specific knowledge, and their image processing stage is time-consuming. To tackle these problems, the authors propose a novel framework that ensembles three state-of-the-art deep convolutional neural networks (DCNNs) over multi-modality images for AD classification. In detail, they extract slices from each modality of each subject, and each DCNN generates a probabilistic score for every input slice. A 'dropout' mechanism is then introduced to discard slices whose category probabilities show low discrimination, and the scores of the retained slices are averaged per subject to form a new feature. Finally, an AdaBoost ensemble classifier with single decision trees as base learners is trained on the MRI and PET probabilistic scores of each DCNN. Evaluations on the Alzheimer's Disease Neuroimaging Initiative database show that the proposed algorithm outperforms existing methods and significantly improves classification accuracy.
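As a concrete illustration of the fusion stage described above, the following Python sketch shows how low-discrimination slices might be dropped, the remaining per-slice scores averaged into a subject-level feature, and an AdaBoost classifier over single decision trees trained on the concatenated MRI and PET scores. All names, shapes, the keep_ratio threshold, and the synthetic data are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of the slice-dropout + averaging + AdaBoost fusion stage.
# Assumes per-slice softmax scores from each DCNN are already available;
# shapes, names, and keep_ratio are illustrative assumptions.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n_subjects, n_modalities, n_dcnns, n_slices, n_classes = 40, 2, 3, 16, 2

def subject_feature(slice_probs, keep_ratio=0.8):
    # 'Dropout' of low-discrimination slices: rank slices by their top-class
    # probability, discard the least confident, and average the rest.
    discrim = slice_probs.max(axis=1)              # confidence of each slice
    n_keep = max(1, int(keep_ratio * len(discrim)))
    keep = np.argsort(discrim)[-n_keep:]           # most discriminative slices
    return slice_probs[keep].mean(axis=0)          # per-subject feature vector

# Simulated per-slice scores: (subject, modality [MRI/PET], DCNN, slice, class).
scores = rng.dirichlet(np.ones(n_classes),
                       size=(n_subjects, n_modalities, n_dcnns, n_slices))

# Concatenate the averaged MRI and PET scores of every DCNN per subject.
X = np.stack([
    np.concatenate([subject_feature(scores[s, m, d])
                    for m in range(n_modalities) for d in range(n_dcnns)])
    for s in range(n_subjects)
])
y = rng.integers(0, 2, size=n_subjects)  # placeholder AD / normal-control labels

# AdaBoost ensemble over single decision trees, as described in the abstract.
clf = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1), n_estimators=100)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```

Averaging only the retained slices keeps uninformative slices from diluting the subject-level feature, which is the stated motivation for the 'dropout' mechanism.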
