Abstract

Background
Positron emission tomography (PET) imaging with amyloid‐specific tracers is the gold standard for assessing amyloid positivity in vivo. However, visual interpretation of PET images is limited by the availability of clinicians and by interobserver variability. Classification of amyloid status typically requires calculation of standardized uptake value ratios (SUVRs), correspondence with neuroanatomy from magnetic resonance imaging, and additional complex, computationally intensive preprocessing steps requiring specialized software. We developed a deep learning model that automatically classifies unprocessed PET images as amyloid positive or negative.

Method
Florbetapir (AV45), florbetaben (FBB), and Pittsburgh compound B (PiB) images from the Alzheimer’s Disease Neuroimaging Initiative (ADNI), AV45 and PiB images from the Open Access Series of Imaging Studies 3 (OASIS3), and FBB, flutemetamol (FMT), and flutafuranol (NAV4694) images from the Australian Imaging Biomarkers and Lifestyle Study of Ageing (AIBL) were used (Table 1). Each image was resized to isotropic 2 mm voxels, and voxel intensities between the 5th and 95th percentiles were normalized to zero mean and unit variance. We trained and internally validated a convolutional neural network (CNN) on AV45 images from ADNI before testing it on images from an independent cohort and/or acquired with a different tracer.

Result
The model achieved an AUC of 0.98 [0.97, 1.0] on test AV45 images from ADNI and generalized well to AV45 images from OASIS3, achieving an AUC of 0.95 [0.92, 0.97] (Table 2). Critically, the model’s performance remained robust even when evaluating images acquired with different tracers, including FBB, FMT, PiB, and NAV4694, scoring an AUC of at least 0.97 on each and demonstrating its ability to learn a tracer‐agnostic pattern of abnormality. Similar conclusions can be drawn from other performance metrics.

Conclusion
We developed a deep learning model that classifies amyloid PET scans with a high degree of accuracy. We demonstrated that the model generalizes well to unseen images acquired from an independent cohort, under a different protocol, and/or with tracers unseen during training. Our model may usefully provide concise metrics for amyloid research as well as support clinical assessment of amyloid‐specific PET when expertise or computational resources are limited.
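The preprocessing described in the Method (resampling to isotropic 2 mm voxels, then z-scoring intensities using only voxels between the 5th and 95th percentiles) can be sketched in a few lines of Python. This is our own minimal illustration, not the authors' pipeline: the function names are ours, and the linear interpolation order is an assumption, since the abstract does not specify one.

```python
import numpy as np
from scipy.ndimage import zoom


def resample_to_isotropic(volume, spacing, target_mm=2.0):
    """Resample a 3D volume to isotropic voxels of `target_mm`.

    `spacing` is the original (z, y, x) voxel size in mm.
    Linear interpolation (order=1) is an assumption here.
    """
    factors = [s / target_mm for s in spacing]
    return zoom(volume, factors, order=1)


def percentile_zscore(volume, lo=5, hi=95):
    """Normalize to zero mean / unit variance, with the mean and
    std computed only from voxels between the lo/hi percentiles."""
    p_lo, p_hi = np.percentile(volume, [lo, hi])
    core = volume[(volume >= p_lo) & (volume <= p_hi)]
    return (volume - core.mean()) / core.std()
```

Computing the normalization statistics from the inner percentile band makes the z-scoring robust to extreme voxel values (e.g., scanner artifacts or background), which would otherwise dominate the global mean and standard deviation.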
