Abstract
Accurate photon attenuation correction (AC) is essential for quantitative PET image reconstruction. MR-based AC on combined PET/MR systems is challenging because, unlike CT intensities, MR intensities do not map directly to attenuation coefficients. Deep learning (DL)-based methods of PET AC have shown promise as alternative solutions. In this work, we evaluate the accuracy of a DL model for MR-based AC on reconstructed head and neck PET images. We use a conditional generative adversarial network (cGAN) known as pix2pix to translate MR input images into pseudo-CT images. The pseudo-CT images are converted to attenuation maps (µ-maps) at 511 keV through a bilinear conversion and used in the reconstruction of PET images acquired on a Signa PET/MR system (GE Healthcare) to correct for photon attenuation. We compared the reconstructed PET images of one patient using our AC method (pix2pixAC) to the existing GE Signa PET/MR atlas-based (AtlasAC) and ZTE-based segmentation (ZTEAC) methods of AC. The reconstructed AC PET images obtained using pix2pixAC are comparable to those from AtlasAC and ZTEAC, with mean absolute errors (MAE) in standardized uptake value (SUV) of 0.24 (2.67%) and 0.19 (2.10%), respectively, over the entire head and neck region. We conclude that our method produces AC PET images similar to those of clinical methods.
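The bilinear conversion from pseudo-CT numbers to 511 keV attenuation coefficients can be sketched as below. The breakpoint at 0 HU and the slope values are commonly cited approximations; they are illustrative assumptions, not the exact coefficients used by the Signa PET/MR reconstruction software.

```python
import numpy as np

# Linear attenuation coefficient of water at 511 keV (cm^-1);
# a widely used approximate value, assumed here for illustration.
MU_WATER_511 = 0.096

def hu_to_mu_511kev(hu, bone_slope=5.1e-5):
    """Piecewise-linear (bilinear) mapping from CT numbers (HU) to a
    511 keV mu-map (cm^-1).

    HU <= 0: air-to-water segment, scaling mu_water by (1 + HU/1000)
    HU >  0: bone segment with a shallower, illustrative slope
    """
    hu = np.asarray(hu, dtype=float)
    soft = MU_WATER_511 * (1.0 + hu / 1000.0)   # -1000 HU (air) -> 0
    bone = MU_WATER_511 + bone_slope * hu       # continuous at 0 HU (water)
    return np.where(hu <= 0.0, soft, bone)
```

For example, `hu_to_mu_511kev(-1000)` (air) yields 0 and `hu_to_mu_511kev(0)` (water) yields 0.096 cm^-1, with bone values rising above that.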