Deep-learning methods (especially convolutional neural networks) using structural magnetic resonance imaging (sMRI) data have been successfully applied to computer-aided diagnosis (CAD) of Alzheimer's disease (AD) and its prodromal stage [i.e., mild cognitive impairment (MCI)]. Because it is practically challenging to capture local and subtle disease-associated abnormalities directly from whole-brain sMRI, most of these deep-learning approaches empirically preselect disease-associated sMRI brain regions for model construction. Considering that such isolated selection of potentially informative brain locations might be suboptimal, a few methods have been proposed to perform disease-associated discriminative region localization and disease diagnosis in a unified deep-learning framework. However, these methods based on task-oriented discriminative localization still suffer from two common limitations, that is: 1) the identified brain locations are strictly consistent across all subjects, which ignores the unique anatomical characteristics of each brain and 2) only limited local regions/patches are used for model training, which does not fully exploit the global structural information provided by the whole-brain sMRI. In this article, we propose an attention-guided deep-learning framework to extract multilevel discriminative sMRI features for dementia diagnosis. Specifically, we first design a backbone fully convolutional network to automatically localize the discriminative brain regions in a weakly supervised manner. Using the identified disease-related regions as spatial attention guidance, we further develop a hybrid network to jointly learn and fuse multilevel sMRI features for CAD model construction. Our proposed method was evaluated on three public datasets (i.e., ADNI-1, ADNI-2, and AIBL), showing superior performance compared with several state-of-the-art methods in both tasks of AD diagnosis and MCI conversion prediction.
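To make the two-stage idea concrete, below is a minimal PyTorch sketch of the architecture the abstract describes: a 3-D fully convolutional backbone that produces a disease-attention map from image-level labels only (weak supervision, here via a simple class-activation-style head), followed by a hybrid head that fuses attention-weighted local features with global whole-brain features. All layer widths, class names (FCNBackbone, HybridNet), and pooling choices are illustrative assumptions, not the authors' actual architecture.

```python
# Hedged sketch of the abstract's pipeline; layer sizes and names are
# illustrative assumptions, not the paper's exact design.
import torch
import torch.nn as nn


class FCNBackbone(nn.Module):
    """3-D fully convolutional backbone for whole-brain sMRI volumes."""

    def __init__(self, in_channels: int = 1, width: int = 16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool3d(2),
            nn.Conv3d(width, 2 * width, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool3d(2),
            nn.Conv3d(2 * width, 4 * width, 3, padding=1), nn.ReLU(inplace=True),
        )
        # 1x1x1 conv yields per-location class evidence; trained with
        # image-level labels only, i.e., weakly supervised localization.
        self.classifier = nn.Conv3d(4 * width, 2, kernel_size=1)

    def forward(self, x):
        feats = self.features(x)                 # (B, C, D, H, W)
        logits_map = self.classifier(feats)      # dense class-evidence map
        logits = logits_map.mean(dim=(2, 3, 4))  # image-level logits via GAP
        # Spatial attention: softmax-normalized evidence for the disease class.
        attn = torch.softmax(logits_map[:, 1].flatten(1), dim=1)
        attn = attn.view_as(logits_map[:, 1]).unsqueeze(1)  # (B, 1, D, H, W)
        return feats, logits, attn


class HybridNet(nn.Module):
    """Fuses global (whole-brain) and attention-weighted local features."""

    def __init__(self, feat_channels: int = 64, num_classes: int = 2):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(2 * feat_channels, 64), nn.ReLU(inplace=True),
            nn.Linear(64, num_classes),
        )

    def forward(self, feats, attn):
        global_feat = feats.mean(dim=(2, 3, 4))         # global average pooling
        local_feat = (feats * attn).sum(dim=(2, 3, 4))  # attention-weighted pooling
        return self.fuse(torch.cat([global_feat, local_feat], dim=1))


if __name__ == "__main__":
    x = torch.randn(2, 1, 64, 64, 64)  # toy batch of 3-D sMRI volumes
    backbone, head = FCNBackbone(), HybridNet(feat_channels=64)
    feats, backbone_logits, attn = backbone(x)
    print(head(feats, attn).shape)  # torch.Size([2, 2])
```

In this sketch the backbone's image-level logits would supply the weak localization supervision, while the hybrid head's concatenation of globally pooled and attention-pooled features stands in for the paper's multilevel feature fusion; the full method's patch-level branches and training losses are omitted.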