Abstract

This work presents and evaluates a novel recurrent deep learning model for reducing the acquisition time in dynamic brain PET imaging without forfeiting clinical information. The clinical dataset included 46 dynamic 18F-DOPA brain PET/CT images, used to evaluate a model that generates complete dynamic PET images from 27% of the total acquisition time. The dataset was split into 35, 6, and 5 images for training, validation, and testing, respectively. Each dynamic PET scan lasted 90 minutes, acquired in list-mode format and reconstructed into 26 dynamic frames. A video prediction deep learning algorithm consisting of two generative adversarial networks and one variational autoencoder was developed and optimized to capture the tracer variation trend from the initial 13 frames (0 to 25 min) and to synthesize the last 13 frames (25 to 90 min). The generated images were analyzed quantitatively with standard metrics: the peak signal-to-noise ratio (PSNR), the structural similarity index metric (SSIM), and the time-activity curve (TAC). Over the synthesized frames (14 to 26), the PSNR varied from 43.24 ± 0.4 to 38.82 ± 0.74 and the SSIM from 0.98 ± 0.03 to 0.81 ± 0.09. The TAC trend showed that our model is able to predict images with a tracer distribution similar to the reference images. We demonstrated that the proposed method can generate the last 65 min of time frames from the initial 25 min of frames in dynamic PET imaging, thus reducing the total scanning time.
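
As a minimal sketch of how the reported image-quality metrics could be computed, the snippet below evaluates PSNR and SSIM per synthesized frame and extracts time-activity curves from a region of interest. The function and variable names (e.g. `evaluate_frames`, `pred`, `ref`, `roi_mask`) are illustrative assumptions rather than the authors' code; it uses scikit-image's standard metric implementations.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_frames(pred, ref, roi_mask):
    """Compare synthesized dynamic PET frames against reference frames.

    pred, ref : float arrays of shape (n_frames, H, W) -- here the 13
                synthesized late frames (frames 14 to 26) and their
                ground-truth counterparts (hypothetical inputs).
    roi_mask  : boolean array of shape (H, W) marking the region of
                interest used for the time-activity curve (TAC).
    """
    data_range = float(ref.max() - ref.min())  # dynamic range for PSNR/SSIM
    psnr = [peak_signal_noise_ratio(r, p, data_range=data_range)
            for r, p in zip(ref, pred)]
    ssim = [structural_similarity(r, p, data_range=data_range)
            for r, p in zip(ref, pred)]
    # TAC: mean activity inside the ROI for each frame, for both series.
    tac_pred = pred[:, roi_mask].mean(axis=1)
    tac_ref = ref[:, roi_mask].mean(axis=1)
    return np.array(psnr), np.array(ssim), tac_pred, tac_ref
```

Evaluating frame by frame rather than over the whole volume matches the abstract's reporting, in which PSNR and SSIM degrade from frame 14 to frame 26 as the prediction horizon grows.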
