Abstract

Dual-panel PET systems, such as the Breast-PET (B-PET) scanner, exhibit strong asymmetric and anisotropic spatially variant deformations in the reconstructed images due to the limited-angle data and the strong depth-of-interaction effects for oblique lines of response (LORs) inherent in such geometries. In our previous work, we studied time-of-flight (TOF) effects and image-based spatially variant point spread function (PSF) resolution models within dual-panel PET reconstruction to reduce these deformations. The application of PSF-based models led to better and more uniform quantification of small lesions across the field of view (FOV). However, the ability of such models to correct for PSF deformation is limited to small objects; the deformations of large objects caused by limited-angle reconstruction cannot be corrected by PSF modeling alone. In this work, we investigate the ability of deep-learning (DL) networks to recover such strong spatially variant image deformations, first using simulated PSF deformations in the image space of a generic dual-panel PET system, and then using simulated and acquired phantom reconstructions from the dual-panel B-PET system developed in our lab at the University of Pennsylvania. For the studies using real B-PET data, the network was trained on simulated synthetic data sets providing ground truth for objects resembling the experimentally acquired phantoms, on which the network's deformation corrections were then tested. The synthetic and acquired limited-angle B-PET data were reconstructed with DIRECT-RAMLA, and the resulting images served as the network inputs. Our results demonstrate that DL approaches can largely eliminate the deformations of limited-angle systems and improve their quantitative performance.
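
As a rough illustration of the supervised setup the abstract describes (not the authors' implementation), the sketch below trains a small image-to-image CNN to map limited-angle reconstructions (e.g., DIRECT-RAMLA images) to their deformation-free ground-truth counterparts. The architecture, residual design, MSE loss, optimizer settings, and data shapes are all illustrative assumptions; the paper's actual network and training details may differ.

```python
# Minimal sketch, assuming a supervised image-to-image correction network
# trained on simulated pairs (deformed limited-angle reconstruction -> truth).
import torch
import torch.nn as nn

class DeformationCorrector(nn.Module):
    """Small encoder-decoder CNN standing in for the paper's DL network."""
    def __init__(self, channels: int = 1):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # downsample
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),    # upsample
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection: the network predicts a correction to the
        # deformed input rather than synthesizing the whole image (a common
        # design choice, assumed here for illustration).
        return x + self.decoder(self.encoder(x))

def train_step(model, optimizer, deformed, truth):
    """One supervised step on a (deformed reconstruction, ground truth) batch."""
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(deformed), truth)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = DeformationCorrector()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    # Random stand-ins for the simulated training pairs: limited-angle
    # reconstructions (inputs) and the undeformed phantoms (targets).
    deformed = torch.rand(4, 1, 128, 128)
    truth = torch.rand(4, 1, 128, 128)
    print(train_step(model, opt, deformed, truth))
```

In this setup, training on synthetic phantoms with known ground truth, then applying the trained network to reconstructions of real acquired phantoms, mirrors the train/test split described in the abstract.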
