Abstract

Alzheimer’s disease impairs patients’ memory and cognitive function, and early intervention can effectively slow its progression. Most existing methods for Alzheimer’s disease analysis rely solely on medical images, ignoring clinical indicators associated with the disease. Furthermore, these methods have so far failed to identify the specific brain regions affected by the disease. To address these limitations, we propose an attention-based multi-task Graph Convolutional Network (GCN) for Alzheimer’s disease analysis. Specifically, we first segment brain regions based on tissue types and assign each region a randomly initialized learnable weight. Then, we introduce multi-task attention units to jointly capture the feature information shared between brain regions and across tasks, enabling cross-interactions between medical images and clinical indicators. Finally, we design task-specific layers for each task, allowing the model to predict both Alzheimer’s disease status and clinical scores. Experimental results on four Alzheimer’s disease datasets show that our approach not only outperforms the state of the art in accuracy, but also explicitly identifies brain regions associated with the disease and provides reliable clinical scores.
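
The abstract outlines a concrete architecture: learnable per-region weights, a shared graph-convolutional trunk, attention units that pool features per task, and task-specific output heads for classification (disease status) and regression (clinical scores). The sketch below is a minimal PyTorch illustration of that pipeline, not the authors' implementation: the layer sizes, the number of tasks, the sigmoid gating of region weights, the attention design, and the adjacency construction are all illustrative assumptions.

```python
# Minimal sketch (assumptions throughout, not the paper's code) of an
# attention-based multi-task GCN: learnable region weights, a shared
# two-layer GCN trunk, per-task attention pooling over brain regions,
# and task-specific heads for diagnosis and clinical-score prediction.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiTaskGCN(nn.Module):
    def __init__(self, n_regions: int, in_dim: int, hid_dim: int = 64,
                 n_classes: int = 3, n_scores: int = 2):
        super().__init__()
        # One learnable importance weight per brain region,
        # randomly initialized as described in the abstract.
        self.region_weight = nn.Parameter(torch.randn(n_regions))
        # Shared GCN trunk (transform, then aggregate over the region graph).
        self.gcn1 = nn.Linear(in_dim, hid_dim)
        self.gcn2 = nn.Linear(hid_dim, hid_dim)
        # One learnable attention query per task (2 tasks assumed here).
        self.task_query = nn.Parameter(torch.randn(2, hid_dim))
        # Task-specific output layers.
        self.cls_head = nn.Linear(hid_dim, n_classes)  # disease status
        self.reg_head = nn.Linear(hid_dim, n_scores)   # clinical scores

    def forward(self, x, adj):
        # x:   (batch, n_regions, in_dim) region features
        #      (e.g., imaging features concatenated with clinical indicators)
        # adj: (n_regions, n_regions) normalized region adjacency matrix
        w = torch.sigmoid(self.region_weight)          # region importance in (0, 1)
        x = x * w.view(1, -1, 1)                       # reweight regions
        h = F.relu(adj @ self.gcn1(x))                 # graph convolution 1
        h = F.relu(adj @ self.gcn2(h))                 # graph convolution 2
        # Per-task attention pooling: score regions, softmax, weighted sum.
        att = torch.einsum('brd,td->btr', h, self.task_query)
        att = F.softmax(att, dim=-1)                   # (batch, 2, n_regions)
        pooled = torch.einsum('btr,brd->btd', att, h)  # (batch, 2, hid_dim)
        logits = self.cls_head(pooled[:, 0])           # classification task
        scores = self.reg_head(pooled[:, 1])           # regression task
        # Returning w exposes which regions the model deems disease-relevant.
        return logits, scores, w


# Usage with random data (90 regions is a common atlas size, assumed here).
model = MultiTaskGCN(n_regions=90, in_dim=16)
x = torch.randn(4, 90, 16)
adj = torch.eye(90)  # placeholder adjacency; a real one comes from segmentation
logits, scores, region_importance = model(x, adj)
```

The returned `region_importance` vector is one plausible way such a model can "explicitly identify brain regions associated with the disease": regions whose learned weights remain high after training are the ones the shared trunk relies on.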
