Dopamine transporter imaging is routinely used in the diagnosis of Parkinson's disease (PD) and atypical parkinsonian syndromes (APS). While [11C]CFT PET is prevalent in Asia, where a large APS database exists, Europe relies on [123I]FP-CIT SPECT, for which APS data are limited. Our aim was to develop a deep learning-based method for converting [11C]CFT PET images into [123I]FP-CIT SPECT images, facilitating multicenter studies and overcoming data scarcity to promote artificial intelligence (AI) advancements. A CycleGAN was trained on [11C]CFT PET (n = 602, 72% PD) and [123I]FP-CIT SPECT (n = 1152, 85% PD) images from PD and non-parkinsonian control (NC) subjects. The model generated synthetic SPECT images from a real PET test set (n = 67, 75% PD), and the synthetic images were evaluated quantitatively and visually. The Fréchet Inception Distance indicated higher similarity between synthetic and real SPECT than between synthetic SPECT and real PET. A deep learning classification model trained on synthetic SPECT achieved a sensitivity of 97.2% and a specificity of 90.0% on real SPECT images. Striatal specific binding ratios of synthetic SPECT did not differ significantly from those of real SPECT; the striatal left-right differences and the putamen binding ratio differed significantly only in the PD cohort. Real PET and real SPECT showed a higher contrast-to-noise ratio than synthetic SPECT. Visual grading analysis scores showed no significant differences between real and synthetic SPECT, although reduced diagnostic performance was observed on synthetic images. The CycleGAN generated synthetic SPECT images that were visually indistinguishable from real ones and retained disease-specific information, demonstrating the feasibility of translating [11C]CFT PET to [123I]FP-CIT SPECT. This cross-modality synthesis could further enhance AI classification accuracy, supporting the diagnosis of PD and APS.
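The abstract uses the Fréchet Inception Distance (FID) to compare the distribution of synthetic SPECT images with the real SPECT and real PET distributions. As a rough illustration only (not the paper's implementation, which operates on Inception-network feature embeddings of the images and full covariance matrices), the FID between two Gaussians with diagonal covariances reduces to a per-dimension sum; all names below are hypothetical:

```python
import math

def fid_diagonal(mu1, var1, mu2, var2):
    """Frechet distance between two Gaussians with diagonal
    covariances: sum over dimensions of
    (mu1 - mu2)^2 + var1 + var2 - 2*sqrt(var1*var2).
    This is a simplification of the general closed form, which
    requires a matrix square root of the covariance product."""
    total = 0.0
    for m1, v1, m2, v2 in zip(mu1, var1, mu2, var2):
        total += (m1 - m2) ** 2 + v1 + v2 - 2.0 * math.sqrt(v1 * v2)
    return total

# Identical feature statistics give FID 0; the distance grows as
# the means or variances of the two distributions diverge.
print(fid_diagonal([0.0, 0.0], [1.0, 1.0], [0.0, 0.0], [1.0, 1.0]))  # 0.0
print(fid_diagonal([0.0, 0.0], [1.0, 1.0], [1.0, 0.0], [1.0, 1.0]))  # 1.0
```

Lower values indicate that the two image sets are statistically closer, which is why a smaller synthetic-vs-real-SPECT FID than synthetic-vs-real-PET FID supports successful modality translation.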