Abstract

Magnetic resonance imaging (MRI) is widely used to assess the progression of Alzheimer's disease (AD) by providing structural information about disease-associated regions (e.g., atrophic regions). In this paper, we propose a lightweight cross-view hierarchical fusion network (CvHF-net), consisting of local patch subnets and a global subject subnet, that jointly localizes and identifies discriminative local patches and regions in whole-brain MRI; feature representations are then jointly learned and fused from these patches to construct hierarchical classification models for AD diagnosis. First, based on the extracted class-discriminative 3D patches, the local patch subnets represent each 3D patch by multiple 2D views through an attention-aware hierarchical fusion structure in a divide-and-conquer manner. Because different local patches contribute unequally to AD identification, the global subject subnet biases the allocation of available resources toward the most informative patches to obtain subject-level global information. In addition, an instance-declined pruning algorithm is embedded in CvHF-net to adaptively select the most discriminative patches in a task-driven manner. The proposed method was evaluated on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, and the experimental results show that it achieves good performance on AD diagnosis.
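
To make the cross-view idea concrete, the sketch below illustrates one plausible way to represent a 3D MRI patch by multiple 2D views and fuse them with an attention weighting, as described in the abstract. This is a minimal, hypothetical PyTorch sketch, not the authors' CvHF-net implementation: the module name, the choice of three orthogonal central slices as views, the small shared 2D CNN, and the single-layer attention scorer are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossViewPatchEncoder(nn.Module):
    """Illustrative sketch: encode a 3D patch via three orthogonal 2D views
    and fuse the per-view features with a learned attention weighting.
    Hypothetical design; not the paper's actual local patch subnet."""

    def __init__(self, feat_dim=64):
        super().__init__()
        # Small shared 2D CNN applied to each view (assumed architecture).
        self.view_cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(32, feat_dim)
        # Attention scorer producing one weight per view.
        self.attn = nn.Linear(feat_dim, 1)

    def forward(self, patch):
        # patch: (B, 1, D, H, W) 3D intensity patch.
        b, _, d, h, w = patch.shape
        # Central slices along each axis serve as the 2D views.
        views = [
            patch[:, :, d // 2, :, :],   # axial-like view  (H, W)
            patch[:, :, :, h // 2, :],   # coronal-like view (D, W)
            patch[:, :, :, :, w // 2],   # sagittal-like view (D, H)
        ]
        feats = []
        for v in views:
            f = self.view_cnn(v).flatten(1)   # (B, 32)
            feats.append(self.proj(f))        # (B, feat_dim)
        feats = torch.stack(feats, dim=1)     # (B, 3, feat_dim)
        # Attention weights over the views, then weighted fusion.
        weights = F.softmax(self.attn(feats), dim=1)   # (B, 3, 1)
        fused = (weights * feats).sum(dim=1)           # (B, feat_dim)
        return fused

# Usage: fuse two 32x32x32 patches into view-attended feature vectors.
if __name__ == "__main__":
    encoder = CrossViewPatchEncoder(feat_dim=64)
    patches = torch.randn(2, 1, 32, 32, 32)
    print(encoder(patches).shape)  # torch.Size([2, 64])
```

In a full pipeline, per-patch features of this kind would be aggregated by a subject-level model that weights the most informative patches, analogous to the global subject subnet described above.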
