Historically, patient datasets have been used to develop and validate various reconstruction algorithms for PET/MRI and PET/CT. To enable such algorithm development without the need to acquire hundreds of patient exams, in this article we demonstrate a deep learning technique to generate synthetic but realistic whole-body PET sinograms from abundantly available whole-body MRI. Specifically, we use a dataset of 56 18F-FDG-PET/MRI exams to train a 3-D residual UNet to predict physiologic PET uptake from whole-body T1-weighted MRI. In training, we implemented a balanced loss function to generate realistic uptake across a large dynamic range and computed losses along tomographic lines of response to mimic the PET acquisition. The predicted PET images are forward projected to produce synthetic PET (sPET) time-of-flight (ToF) sinograms that can be used with vendor-provided PET reconstruction algorithms, including those using CT-based attenuation correction (CTAC) and MR-based attenuation correction (MRAC). The resulting synthetic data recapitulates physiologic 18F-FDG uptake, e.g., high uptake localized to the brain and bladder, as well as uptake in the liver, kidneys, heart, and muscle. To simulate abnormalities with high uptake, we also insert synthetic lesions. We demonstrate that this sPET data can be used interchangeably with real PET data for the PET quantification task of comparing CTAC and MRAC methods, achieving ≤ 7.6% error in mean SUV compared to using real data. Together, these results show that the proposed sPET data pipeline can be reasonably used for the development, evaluation, and validation of PET/MRI reconstruction methods.
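To make the training objective concrete, the sketch below illustrates one way a "balanced" voxel-wise loss across a large dynamic range could be combined with a projection-domain loss. This is a minimal illustration under assumed conventions (PyTorch tensors of shape batch × 1 × D × H × W, inverse-intensity weighting, and a crude axis-aligned projector in place of a full ToF line-of-response projector); it is not the authors' implementation.

```python
# Minimal sketch (assumed, not the paper's exact method): a dynamic-range-balanced
# voxel loss plus a simplified projection-domain loss for PET uptake prediction.
import torch


def balanced_voxel_loss(pred, target, eps=1e-3):
    # Down-weight very hot regions (e.g., brain, bladder) so that low-uptake
    # tissue still contributes to the loss across a large dynamic range.
    weights = 1.0 / (target.abs() + eps)
    weights = weights / weights.mean()
    return (weights * (pred - target).abs()).mean()


def projection_loss(pred, target):
    # Crude stand-in for losses along tomographic lines of response: compare
    # line integrals (sums) of activity along two in-plane axes rather than
    # applying a full time-of-flight forward projector.
    loss = 0.0
    for dim in (-1, -2):  # integrate along W, then H
        loss = loss + (pred.sum(dim=dim) - target.sum(dim=dim)).abs().mean()
    return loss


def total_loss(pred, target, lam=0.1):
    # lam balances image-domain and projection-domain terms (illustrative value).
    return balanced_voxel_loss(pred, target) + lam * projection_loss(pred, target)


if __name__ == "__main__":
    pred = torch.rand(1, 1, 32, 64, 64)    # predicted uptake volume
    target = torch.rand(1, 1, 32, 64, 64)  # reference PET uptake volume
    print(total_loss(pred, target).item())
```

In this simplified form, the projection term encourages agreement of integrated activity along candidate projection directions, mirroring (at a coarse level) the idea of penalizing errors in the sinogram domain before the predicted volumes are forward projected into ToF sinograms.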