Abstract

Early detection and treatment can slow the progression of Alzheimer's Disease (AD), one of the most common neurodegenerative diseases. Recent studies have demonstrated the value of multimodal fusion for early AD detection, but most approaches fail to account for the distinct domains of the data modalities, the relationships between them, and variations in their relative importance. To address these challenges, we propose a Hierarchical Attention-Based Multimodal Fusion framework (HAMF) that integrates imaging, genetic, and clinical data for early AD detection. HAMF uses attention mechanisms to learn an appropriate weight for each modality, and hierarchical attention to capture cross-modal interactions. HAMF outperforms state-of-the-art methods and all unimodal baselines, achieving an accuracy of 87.2% and an AUC of 0.913. Comparing unimodal and multimodal models, we find that multimodal fusion improves performance over any single modality and that clinical data is the most informative modality. Our ablation experiments confirm the effectiveness of each component of HAMF. Finally, we use SHapley Additive exPlanations (SHAP) to improve the model's interpretability. We offer the model as a guide for future research in the field and as a framework for generating actionable advice and decision support for clinical practitioners.
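The abstract does not give implementation details, so the following is a minimal sketch, in PyTorch, of what a two-level (hierarchical) attention fusion over per-modality embeddings could look like. The class name, embedding dimensions, head count, and pooling choice are all illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class HierarchicalAttentionFusion(nn.Module):
    """Sketch of two-level attention fusion over imaging, genetic,
    and clinical embeddings (all sizes are illustrative)."""

    def __init__(self, dim=128, n_classes=2):
        super().__init__()
        # Level 1: score each modality embedding to learn its relative weight.
        self.modality_scorer = nn.Linear(dim, 1)
        # Level 2: self-attention across modalities to model their interactions.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.classifier = nn.Linear(dim, n_classes)

    def forward(self, modality_embeddings):
        # modality_embeddings: (batch, n_modalities, dim), one row per modality.
        x = modality_embeddings
        # Level 1: softmax over modalities yields per-modality attention weights.
        weights = torch.softmax(self.modality_scorer(x), dim=1)  # (batch, n_mod, 1)
        x = x * weights
        # Level 2: attend across the weighted modalities to capture interactions.
        x, _ = self.cross_attn(x, x, x)
        # Pool the attended modalities and classify (e.g., AD vs. control).
        fused = x.mean(dim=1)
        return self.classifier(fused)

# Usage: three pre-encoded modality vectors per subject
# (imaging, genetic, clinical), batch of 8.
embeddings = torch.randn(8, 3, 128)
model = HierarchicalAttentionFusion()
logits = model(embeddings)  # shape: (8, 2)
```

The level-1 weights in this sketch also expose which modality the model relies on most, which is consistent with the paper's finding that clinical data carries the most weight; the actual HAMF architecture may differ.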
