Abstract
Alzheimer's disease (AD) is a neurodegenerative disorder that affects millions of individuals worldwide. Real-world AD imaging datasets make it difficult to construct reliable longitudinal models owing to imaging modality uncertainty. In addition, existing models still struggle to retain or acquire important information about disease progression from previous to follow-up time points. For example, the gate outputs of current recurrent models can lie close to a specific value that indicates the model is uncertain about whether to retain or forget information. In this study, we propose a model that extracts and constrains each modality into a common representation space, capturing intermodality interactions under modality uncertainty to predict AD progression. In addition, we provide an auxiliary function that enables the recurrent gates to control the flow of information over time more robustly and effectively on longitudinal data. We conducted a comparative analysis on data from the Alzheimer's Disease Neuroimaging Initiative database, and our model outperformed competing methods across all evaluation metrics. The proposed model therefore offers a promising solution to the modality uncertainty challenge in multimodal longitudinal AD progression prediction.
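The abstract does not specify the form of the auxiliary function. The following is a minimal sketch of one plausible formulation, assuming sigmoid gates for which 0.5 marks maximal uncertainty about retaining versus forgetting; the function name, penalty form, and loss weight are illustrative assumptions, not the authors' definition.

```python
import torch

def gate_confidence_penalty(gate_values: torch.Tensor) -> torch.Tensor:
    """Hypothetical auxiliary term: penalize sigmoid gate activations near 0.5,
    where the gate neither clearly retains nor clearly forgets information."""
    # 0.25 - (g - 0.5)^2 is maximal (0.25) at g = 0.5 and zero at g in {0, 1},
    # so minimizing it pushes gates toward confident retain/forget decisions.
    return (0.25 - (gate_values - 0.5) ** 2).mean()

# Illustrative usage: add the penalty to the main prediction loss.
gates = torch.sigmoid(torch.randn(8, 64))   # e.g., forget-gate outputs at one time step
aux_loss = 0.1 * gate_confidence_penalty(gates)  # weight 0.1 is an assumption
```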