Abstract
Low-dose positron emission tomography (LD-PET) imaging is commonly employed in preclinical research to minimize radiation exposure to animal subjects. However, LD-PET images often exhibit poor quality and high noise levels due to their low signal-to-noise ratio. Deep learning (DL) techniques such as generative adversarial networks (GANs) and convolutional neural networks (CNNs) can enhance the quality of images derived from noisy or low-quality PET data, which encode critical information about the radioactivity distribution in the body. Our objective was to optimize image quality and reduce noise in preclinical PET images by using the sinogram domain as input to DL models, yielding improved image quality compared with LD-PET images. A GAN and a CNN model were used to predict high-dose (HD) preclinical PET sinograms from the corresponding LD preclinical PET sinograms. To generate the datasets, experiments were conducted on micro-phantoms, animal subjects (rats), and virtual simulations. The quality of the DL-generated images was evaluated using the following quantitative measures: structural similarity index measure (SSIM), root mean squared error (RMSE), peak signal-to-noise ratio (PSNR), signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR). In addition, spatial resolution was measured on both the DL input and output as the full width at half maximum (FWHM) and full width at tenth maximum (FWTM). The DL results were then compared with conventional denoising algorithms, namely non-local means (NLM) and block-matching and 3D filtering (BM3D). The DL models effectively learned image features and produced high-quality images, as reflected in the quantitative metrics. Notably, the FWHM and FWTM values of the DL PET images were significantly more accurate than those of the LD, NLM, and BM3D PET images, and comparable in precision to those of the HD PET images.
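The quantitative measures above are standard reference-based metrics. A minimal sketch of how RMSE, PSNR, and a simplified (single-window, global) SSIM could be computed between an HD reference and a denoised image is shown below; the array names and noise model are illustrative assumptions, not the paper's actual data, and production code would typically use a library implementation with local windows for SSIM.

```python
import numpy as np

def rmse(ref, img):
    # Root mean squared error between reference and test image.
    return np.sqrt(np.mean((ref - img) ** 2))

def psnr(ref, img, data_range=None):
    # Peak signal-to-noise ratio in dB, relative to the reference's dynamic range.
    if data_range is None:
        data_range = ref.max() - ref.min()
    return 20.0 * np.log10(data_range / rmse(ref, img))

def global_ssim(ref, img, data_range=None):
    # Simplified SSIM computed over the whole image as one window;
    # the full metric averages SSIM over local sliding windows.
    if data_range is None:
        data_range = ref.max() - ref.min()
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = ref.mean(), img.mean()
    var_x, var_y = ref.var(), img.var()
    cov = ((ref - mu_x) * (img - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )

# Hypothetical example: an HD reference and a noisy LD-like version of it.
rng = np.random.default_rng(0)
hd = rng.random((64, 64))
ld = hd + rng.normal(0.0, 0.1, hd.shape)

print(f"RMSE: {rmse(hd, ld):.4f}")
print(f"PSNR: {psnr(hd, ld):.2f} dB")
print(f"SSIM (global): {global_ssim(hd, ld):.4f}")
```

An identical image pair gives RMSE 0 and SSIM 1; a well-denoised DL output should move both metrics toward those values relative to the LD input.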
The low MSE loss confirmed that the models performed well. To further improve training, the generator loss (G loss) was raised above the discriminator loss (D loss), allowing the GAN model to converge. The sinograms generated by the GAN network closely resembled real HD preclinical PET sinograms and were more realistic than the LD sinograms. Image quality and noise levels improved noticeably in the predicted HD images. Importantly, the DL networks did not substantially compromise the spatial resolution of the images.
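The interplay of G loss and D loss described above can be illustrated with the standard binary cross-entropy formulation of GAN losses. The sketch below assumes hypothetical discriminator outputs (probabilities that an input sinogram is a real HD one); it is a minimal numerical illustration of the loss balance, not the paper's training code.

```python
import numpy as np

def bce(pred, target):
    # Binary cross-entropy over discriminator probabilities.
    eps = 1e-7
    pred = np.clip(pred, eps, 1.0 - eps)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

# Hypothetical discriminator outputs for one batch:
# d_real -> scores on real HD sinograms, d_fake -> scores on generated sinograms.
d_real = np.array([0.90, 0.85, 0.80])
d_fake = np.array([0.30, 0.40, 0.35])

# D loss: classify real as 1 and fake as 0.
d_loss = bce(d_real, np.ones_like(d_real)) + bce(d_fake, np.zeros_like(d_fake))

# G loss: the generator wants its fakes to be classified as real (target 1).
g_loss = bce(d_fake, np.ones_like(d_fake))

print(f"D loss: {d_loss:.4f}")
print(f"G loss: {g_loss:.4f}")
```

As the generator improves and its sinograms fool the discriminator (d_fake rising toward 1), the G loss falls; monitoring the relative magnitude of the two losses is a common, if informal, way to track GAN convergence.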