Abstract

Voxel-level application of kinetic modeling (KM) to dynamic PET images frequently suffers from high noise levels, which drastically reduce the precision of parametric image analysis. In this paper, we investigate the use of machine learning and artificial neural networks to denoise dynamic PET images. We train a deep denoising autoencoder (DAE) using noisy and noise-free spatiotemporal image patches extracted from simulated images of [11C]raclopride, a dopamine D2 receptor antagonist. The DAE-processed dynamic and corresponding parametric images (simulated and acquired) are compared with those obtained with conventional denoising techniques, including temporal and spatial Gaussian smoothing, iterative spatiotemporal smoothing/deconvolution, and highly constrained backprojection (HYPR) processing. The simulated (acquired) parametric image non-uniformity was 7.75% (19.49%) with temporal smoothing, 5.90% (14.50%) with spatial smoothing, 5.82% (16.21%) with smoothing/deconvolution, 5.49% (13.38%) with HYPR, and 3.52% (11.41%) with the DAE. The DAE also produced the best results in terms of the coefficient of variation of voxel values and the structural similarity index. Denoising-induced bias in the regional mean binding potential was 7.8% with temporal smoothing, 26.31% with spatial smoothing, 28.61% with smoothing/deconvolution, 27.63% with HYPR, and 14.8% with the DAE. When the test data did not match the training data, the DAE produced erroneous outcomes. Our results demonstrate that a deep DAE can provide a substantial reduction in voxel-level noise compared with conventional spatiotemporal denoising methods while introducing a similar or lower amount of bias. The better DAE performance comes at the cost of lower generality and the need for appropriate training data.
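As an illustration of the training scheme described above, the minimal sketch below shows how a denoising autoencoder can be fit on paired noisy and noise-free spatiotemporal patches. The architecture, patch dimensions, and training settings (PATCH_VOXELS, N_FRAMES, hidden and bottleneck widths, epochs) are illustrative assumptions made for this sketch and are not taken from the paper; PyTorch is used only as an example framework.

    # Minimal denoising-autoencoder sketch for spatiotemporal PET patches.
    # Patch size, layer widths, and training settings are illustrative
    # assumptions, not values from the paper.
    import torch
    import torch.nn as nn

    PATCH_VOXELS = 5 * 5 * 5   # assumed spatial patch size (voxels)
    N_FRAMES = 20              # assumed number of dynamic frames
    IN_DIM = PATCH_VOXELS * N_FRAMES

    class DenoisingAutoencoder(nn.Module):
        def __init__(self, in_dim=IN_DIM, hidden=512, bottleneck=128):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Linear(in_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, bottleneck), nn.ReLU(),
            )
            self.decoder = nn.Sequential(
                nn.Linear(bottleneck, hidden), nn.ReLU(),
                nn.Linear(hidden, in_dim),
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    def train_dae(noisy_patches, clean_patches, epochs=50, lr=1e-3):
        """Fit the DAE to map noisy patches to their noise-free counterparts."""
        model = DenoisingAutoencoder()
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        loss_fn = nn.MSELoss()
        for _ in range(epochs):
            opt.zero_grad()
            loss = loss_fn(model(noisy_patches), clean_patches)
            loss.backward()
            opt.step()
        return model

    if __name__ == "__main__":
        # Synthetic stand-in data: in practice the patches would be extracted
        # from simulated noisy and noise-free dynamic PET images.
        clean = torch.rand(256, IN_DIM)
        noisy = clean + 0.1 * torch.randn_like(clean)
        dae = train_dae(noisy, clean, epochs=5)
        denoised = dae(noisy)
        print(denoised.shape)

At inference time, the trained model would be applied patch-wise to the measured dynamic image and the denoised patches recombined before kinetic modeling; the dense architecture here is only one possible choice, and a convolutional encoder-decoder over the patch volume is an equally plausible alternative.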
