Multiplexed PET imaging has the potential to revolutionize clinical decision-making by capturing data from multiple radiotracers simultaneously in a single scan, enhancing diagnostic accuracy and patient comfort. Using a transformer-based deep learning model, this study underscores the potential of advanced imaging techniques to streamline diagnosis and improve patient outcomes. The research cohort consisted of 120 patients, ranging from cognitively unimpaired individuals to those with mild cognitive impairment, dementia, and other mental disorders. Patients underwent several imaging assessments, including 3D T1-weighted MRI, amyloid PET with either 18F-florbetapir (FBP) or 18F-flutemetamol (FMM), and 18F-FDG PET. Summed FMM/FBP and FDG images served as a proxy for simultaneous scanning with 2 different tracers. A SwinUNETR model, a convolution-free transformer architecture, was trained for image translation using a mean squared error (MSE) loss function and 5-fold cross-validation. Visual evaluation assessed image similarity and amyloid status, comparing synthesized images with the actual ones, and statistical analysis was conducted to determine the significance of differences. Visual inspection of the synthesized images revealed remarkable similarity to the reference images across clinical statuses. For the FBP tracer, the mean Centiloid bias was 15.70 ± 29.78, 0.35 ± 33.68, and 6.52 ± 25.19 for dementia, mild cognitive impairment, and healthy control subjects, respectively; for FMM, it was -6.85 ± 25.02, 4.23 ± 23.78, and 5.71 ± 21.72, respectively. Clinical evaluation by 2 readers further confirmed the model's performance: 97 FBP/FMM and 63 FDG synthesized images (from 120 subjects) were rated similar to the ground truth (rank 3), whereas 3 FBP/FMM and 15 FDG synthesized images were rated nonsimilar (rank 1). Amyloid status assessment based on the synthesized images achieved promising performance, with an average sensitivity of 95% ± 2.5%, specificity of 72.5% ± 12.5%, and accuracy of 87.5% ± 2.5%. Error distribution analyses provided insight into error levels across brain regions, with most errors falling between -0.1 and +0.2 SUV ratio (SUVR). Correlation analyses demonstrated strong associations between actual and synthesized images, particularly for FMM (FBP: Y = 0.72X + 20.95, R² = 0.54; FMM: Y = 0.65X + 22.77, R² = 0.77). Overall, this study demonstrates the potential of SwinUNETR, a convolution-free transformer architecture, for synthesizing realistic FDG and FBP/FMM images from summation scans that mimic simultaneous dual-tracer imaging.
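The abstract gives only the architecture (SwinUNETR), the MSE loss, and the cross-validation scheme, so the following is a minimal sketch of such a training setup rather than the authors' implementation. It assumes MONAI's SwinUNETR (around v1.2, where the constructor still accepts img_size), a single-channel summed-PET input, and a two-channel output (one synthesized FDG volume, one synthesized FBP/FMM volume); the patch size, feature size, and optimizer settings are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of training SwinUNETR to translate
# a summed dual-tracer PET volume into separate single-tracer images.
import torch
from monai.networks.nets import SwinUNETR

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Assumed configuration: 96^3 patches (divisible by 32, as SwinUNETR requires),
# 1 input channel (summed FDG + amyloid PET), 2 output channels
# (synthesized FDG and synthesized FBP/FMM).
model = SwinUNETR(
    img_size=(96, 96, 96),
    in_channels=1,
    out_channels=2,
    feature_size=48,
).to(device)

loss_fn = torch.nn.MSELoss()  # the study reports an MSE training loss
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)  # assumed settings

def train_step(summed_pet: torch.Tensor, target_pair: torch.Tensor) -> float:
    """One optimization step.

    summed_pet:  (B, 1, 96, 96, 96) summed dual-tracer volume
    target_pair: (B, 2, 96, 96, 96) ground-truth single-tracer volumes
    """
    model.train()
    optimizer.zero_grad()
    pred = model(summed_pet.to(device))
    loss = loss_fn(pred, target_pair.to(device))
    loss.backward()
    optimizer.step()
    return loss.item()
```

In the study's design this loop would be repeated across the 5 cross-validation folds, with the held-out fold used to generate the synthesized images that were then scored visually and quantitatively (Centiloid bias, SUVR error, regression against the actual scans).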