Abstract

Utilizing biomedical data from multiple modalities improves the diagnostic accuracy of neurodegenerative diseases. However, multi-modality data are often incomplete because not all modalities can be collected for every individual. When using such incomplete data for diagnosis, current approaches for addressing missing data, such as imputation, matrix completion, and multi-task learning, implicitly assume a linear data-to-label relationship, which limits their performance. We therefore propose multi-task deep learning for incomplete data, where the prediction tasks associated with different modality combinations are learned jointly to improve the performance of each task. Specifically, we devise a multi-input multi-output deep learning framework and train the network subnet-wise, partially updating its weights according to the availability of modality data. Experimental results on the ADNI dataset show that our method outperforms state-of-the-art methods.
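To make the subnet-wise training idea concrete, the following is a minimal sketch (not the authors' code) of a multi-input multi-output network in PyTorch: each modality has its own encoder subnet, each modality combination has its own output head, and a training step back-propagates only through the subnets on that task's forward path, so weights of unused modalities are left untouched. The modality names, layer sizes, and task set here are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class MultiModalNet(nn.Module):
    """Multi-input multi-output network: one encoder per modality,
    one classification head per modality combination (task)."""

    def __init__(self, dims, hidden=32, n_classes=2):
        super().__init__()
        # One encoder subnet per modality (e.g., MRI, PET, CSF).
        self.encoders = nn.ModuleDict(
            {m: nn.Sequential(nn.Linear(d, hidden), nn.ReLU()) for m, d in dims.items()}
        )
        # One output head per modality combination.
        self.heads = nn.ModuleDict({
            "mri": nn.Linear(hidden, n_classes),
            "mri+pet": nn.Linear(2 * hidden, n_classes),
            "mri+pet+csf": nn.Linear(3 * hidden, n_classes),
        })

    def forward(self, inputs, task):
        # Encode only the modalities available for this task,
        # concatenate their features, and classify with the task head.
        feats = [self.encoders[m](inputs[m]) for m in task.split("+")]
        return self.heads[task](torch.cat(feats, dim=1))

def train_step(model, optimizer, inputs, labels, task):
    """Partial, subnet-wise update: only parameters on this task's
    forward path receive gradients; PyTorch optimizers skip the rest."""
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(inputs, task), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    # Hypothetical feature dimensions per modality.
    model = MultiModalNet({"mri": 90, "pet": 90, "csf": 5})
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    # A batch of subjects that have MRI and PET but no CSF data
    # is routed to the "mri+pet" task only.
    batch = {"mri": torch.randn(8, 90), "pet": torch.randn(8, 90)}
    labels = torch.randint(0, 2, (8,))
    print(train_step(model, opt, batch, labels, "mri+pet"))
```

Because samples with different modality combinations share the same per-modality encoders, every task's training data contributes to the shared subnets, which is how the joint learning across incomplete modality patterns can help each individual task.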
