Abstract

Computer-aided diagnosis contributes to the early detection of mild cognitive impairment (MCI). Although many deep learning methods have achieved favorable performance on Alzheimer's disease (AD) diagnosis tasks, three issues remain. First, single-modality methods model disease features from only one view, yielding a one-sided representation. Second, existing patch-based methods usually ignore the spatial relations between local image patches when modeling the global feature representation of the brain. Third, existing patch-based methods rely on anatomical landmark detection algorithms to pre-determine informative brain locations, so localizing brain atrophy not only requires extensive expert experience but may also miss potential lesion areas. In this paper, we propose a patch-based deep multi-modal learning (PDMML) framework for brain disease diagnosis. Specifically, we design a discriminative location discovery strategy that filters out normal regions without prior knowledge. Multi-modal imaging features are integrated at the patch level to capture multi-view representations of brain disease. The local patches are then learned jointly, preventing the loss of spatial information caused by directly flattening the patches. Experimental results on 842 subjects from the ADNI dataset demonstrate that the proposed method excels at discriminative location discovery and brain disease diagnosis.
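To make the pipeline concrete, below is a minimal sketch of how patch-level multi-modal fusion, discriminative location scoring, and joint patch learning could fit together. It assumes PyTorch, paired patches extracted from co-registered MRI and PET volumes, a shared 3D CNN patch encoder, and a Transformer encoder for inter-patch modeling; all module names, patch sizes, and the Transformer choice are illustrative assumptions rather than the authors' exact design.

```python
# Illustrative sketch only: patch-level multi-modal fusion with attention-based
# location scoring and joint inter-patch learning. Not the authors' implementation.
import torch
import torch.nn as nn


class PatchEncoder(nn.Module):
    """3D CNN that embeds one image patch into a feature vector."""

    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(32, dim),
        )

    def forward(self, x):  # x: (B * P, 1, d, d, d)
        return self.net(x)


class PDMMLSketch(nn.Module):
    def __init__(self, dim=64, num_classes=2):
        super().__init__()
        self.mri_enc = PatchEncoder(dim)   # one encoder per modality
        self.pet_enc = PatchEncoder(dim)
        self.fuse = nn.Linear(2 * dim, dim)  # patch-level multi-modal fusion
        self.score = nn.Linear(dim, 1)       # discriminative-location scores
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.joint = nn.TransformerEncoder(layer, num_layers=2)  # joint patch learning
        self.cls = nn.Linear(dim, num_classes)

    def forward(self, mri_patches, pet_patches):
        # mri_patches, pet_patches: (B, P, 1, d, d, d) paired patches per location
        B, P = mri_patches.shape[:2]
        f_mri = self.mri_enc(mri_patches.flatten(0, 1)).view(B, P, -1)
        f_pet = self.pet_enc(pet_patches.flatten(0, 1)).view(B, P, -1)
        f = self.fuse(torch.cat([f_mri, f_pet], dim=-1))   # (B, P, dim)
        w = torch.softmax(self.score(f), dim=1)            # soft attention over locations
        f = self.joint(f)                                  # model inter-patch spatial relations
        pooled = (w * f).sum(dim=1)                        # attention-weighted pooling
        return self.cls(pooled), w.squeeze(-1)             # logits + patch importance


# Usage: two subjects, 8 paired 16^3 patches from co-registered MRI/PET volumes.
model = PDMMLSketch()
mri = torch.randn(2, 8, 1, 16, 16, 16)
pet = torch.randn(2, 8, 1, 16, 16, 16)
logits, patch_importance = model(mri, pet)
print(logits.shape, patch_importance.shape)  # torch.Size([2, 2]) torch.Size([2, 8])
```

In this reading, the per-patch attention weights play the role of discriminative location discovery (high-weight patches mark candidate lesion areas without landmark priors), while the Transformer models relations between patches instead of flattening them into a single vector.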
