Objective. Simultaneous PET/MR scanners combine the soft-tissue contrast of MR imaging with the functional imaging of PET. However, attenuation correction of breast PET/MR imaging is technically challenging. The purpose of this study is to establish a robust attenuation correction algorithm for breast PET/MR images that relies on deep learning (DL) to recreate the missing portions of the patient's anatomy (truncation completion) and to provide bone information for attenuation correction from only the PET data.

Approach. Data acquired from 23 female subjects with invasive breast cancer scanned with 18F-fluorodeoxyglucose PET/CT and PET/MR, localized to the breast region, were used for this study. Three DL models were trained: a U-Net with mean absolute error loss (DLMAE), a U-Net with mean squared error loss (DLMSE), and a U-Net with perceptual loss (DLPerceptual). Each model predicts synthetic CT (sCT) images for PET attenuation correction (AC) given non-attenuation-corrected (NAC) PET images from the PET/MR scanner as input. The PET images reconstructed with the DL-derived and Dixon-based sCTs were compared against those reconstructed from CT images by calculating the percent error of the standardized uptake value (SUV) and conducting Wilcoxon signed-rank tests.

Main results. The sCT images from the DLMAE, DLMSE, and DLPerceptual models were similar in mean absolute error (MAE), peak signal-to-noise ratio, and normalized cross-correlation. No significant difference in SUV was found between the PET images reconstructed using the DLMSE and DLPerceptual sCTs and those reconstructed using the reference CT for AC in any tissue region. All DL methods performed better than the Dixon-based method according to the SUV analysis.

Significance. A 3D U-Net with MSE or perceptual loss can be implemented in a reconstruction workflow, and the derived sCT images allow successful truncation completion and attenuation correction for breast PET/MR images.
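For illustration, the three training objectives and the SUV percent-error metric mentioned above can be sketched as follows. This is a minimal NumPy sketch, not the paper's implementation: `feature_fn` is a hypothetical stand-in for a pretrained network's intermediate activations (the usual basis of a perceptual loss), and `suv_percent_error` assumes a simple voxel- or region-wise percent difference.

```python
import numpy as np

def mae_loss(pred, target):
    # Mean absolute error (L1) between synthetic and reference CT volumes.
    return np.mean(np.abs(pred - target))

def mse_loss(pred, target):
    # Mean squared error (L2) between synthetic and reference CT volumes.
    return np.mean((pred - target) ** 2)

def perceptual_loss(pred, target, feature_fn):
    # Perceptual loss: an MSE computed in a learned feature space rather than
    # on raw voxels. feature_fn is a placeholder for a pretrained network's
    # feature extractor (hypothetical here).
    return np.mean((feature_fn(pred) - feature_fn(target)) ** 2)

def suv_percent_error(suv_sct, suv_ct):
    # Percent error of SUV from an sCT-based reconstruction relative to the
    # CT-based reference (assumed form; region definitions are in the paper).
    return 100.0 * (suv_sct - suv_ct) / suv_ct
```

With an identity `feature_fn`, the perceptual loss reduces to the plain MSE, which is why the two objectives often behave similarly when the feature extractor is shallow.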