While multi-modal deep learning approaches trained using magnetic resonance imaging (MRI) and fluorodeoxyglucose positron emission tomography (FDG PET) data have shown promise in the accurate identification of Alzheimer’s disease, their clinical applicability is hindered by the assumption that both modalities are always available during model inference. In practice, clinicians adjust diagnostic tests based on available information and specific clinical contexts. We propose a novel MRI- and FDG PET-based multi-modal deep learning approach that mimics clinical decision-making by incorporating uncertainty estimates of an MRI-based model (generated using Monte Carlo dropout and evidential deep learning) to determine the necessity of an FDG PET scan, and only inputting the FDG PET to a multi-modal model when required. Because FDG PET scans are costly and expose patients to radiation, this approach reduces the need for FDG PET by up to 92% without compromising model performance, thus optimizing resource use and patient safety. Furthermore, using a global model explanation technique, we provide insights into how anatomical changes in brain regions such as the entorhinal cortex, amygdala, and ventricles can positively or negatively influence the need for FDG PET scans, in alignment with clinical understanding of Alzheimer’s disease.
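The uncertainty-gated routing described above can be sketched as follows. This is a minimal, self-contained illustration, not the authors' implementation: the toy two-layer classifier, the random weights, and the entropy threshold `tau` are all hypothetical placeholders, and only the Monte Carlo dropout branch of the uncertainty estimation is shown (evidential deep learning is omitted).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the MRI-only classifier: one hidden layer whose
# dropout stays active at inference time (Monte Carlo dropout).
# Weights are random placeholders, not a trained model.
W1 = rng.normal(size=(16, 8))
W2 = rng.normal(size=(8, 2))

def mc_dropout_predict(x, n_samples=50, p_drop=0.5):
    """Run stochastic forward passes and return the mean softmax
    probabilities plus predictive entropy as the uncertainty score."""
    probs = []
    for _ in range(n_samples):
        h = np.maximum(x @ W1, 0.0)
        mask = rng.random(h.shape) > p_drop   # dropout kept on at inference
        h = h * mask / (1.0 - p_drop)
        logits = h @ W2
        e = np.exp(logits - logits.max())
        probs.append(e / e.sum())
    mean_p = np.mean(probs, axis=0)
    entropy = -np.sum(mean_p * np.log(mean_p + 1e-12))
    return mean_p, entropy

def needs_pet(x, tau=0.5):
    """Request the FDG PET scan (and hence the multi-modal model)
    only when the MRI-only prediction is too uncertain; tau is an
    illustrative threshold, not a value from the paper."""
    _, entropy = mc_dropout_predict(x)
    return bool(entropy > tau)

x = rng.normal(size=16)   # placeholder for MRI-derived features
print("request FDG PET:", needs_pet(x))
```

Subjects for whom `needs_pet` returns False would be diagnosed from MRI alone, which is how the approach cuts FDG PET usage without sacrificing accuracy on the uncertain cases that are escalated.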