Abstract

Time-of-flight (TOF) PET has demonstrated superior image quality and quantitative performance, translating into a considerable gain in signal-to-noise ratio (SNR), reduced noise, and greater robustness to artefacts, thereby improving confidence in clinical diagnosis. This work aimed to assess the performance of deep learning-based TOF PET synthesis from non-TOF PET images. One hundred forty ¹⁸F-FDG brain PET/CT clinical studies were acquired in list-mode format, enabling the generation of both non-TOF and TOF sinograms. The TOF sinograms were split into seven time bins (0, ±1, ±2, ±3). An iterative algorithm was used to reconstruct images from the non-TOF and TOF sinograms. A modified cycle-consistent generative adversarial network (CycleGAN) was implemented to predict TOF images from non-TOF images in both the sinogram and image domains. In the first approach, a single model was trained to predict TOF images from non-TOF images, whereas in the second approach, seven models were trained to synthesize the seven time-bin sinograms from the non-TOF sinogram. Quantitative analysis revealed an improvement in peak signal-to-noise ratio (PSNR) of 9% and 12% in the synthesized TOF images compared with the corresponding non-TOF images in the sinogram and image domains, respectively.
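To make the two key quantities in the abstract concrete, the sketch below illustrates (a) the cycle-consistency loss that gives CycleGAN its name, and (b) the PSNR metric used in the quantitative analysis. The "generators" `G` (non-TOF → TOF) and `F` (TOF → non-TOF) are placeholder linear maps, not the paper's trained networks; all names and shapes here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in generators (hypothetical): a real CycleGAN would use CNNs.
# G maps non-TOF images toward the TOF domain; F maps back.
def G(x):
    return 0.9 * x + 0.1

def F(y):
    return (y - 0.1) / 0.9  # exact inverse of G, for illustration

def cycle_consistency_loss(x, y):
    """L_cyc = ||F(G(x)) - x||_1 + ||G(F(y)) - y||_1 (mean over voxels)."""
    return np.mean(np.abs(F(G(x)) - x)) + np.mean(np.abs(G(F(y)) - y))

def psnr(reference, test):
    """Peak signal-to-noise ratio in dB, as used to compare TOF vs. non-TOF."""
    mse = np.mean((reference - test) ** 2)
    peak = reference.max()
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy batches standing in for non-TOF (x) and TOF (y) image volumes.
x = rng.random((4, 64, 64))
y = rng.random((4, 64, 64))

loss = cycle_consistency_loss(x, y)   # ~0: F is the exact inverse of G here
quality = psnr(y, y + 0.01 * rng.standard_normal(y.shape))
```

Because the placeholder `F` exactly inverts `G`, the cycle loss collapses to floating-point noise; with trained networks it would be a nonzero term minimized jointly with the adversarial losses.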
