Reconstructed PET images are typically noisy, especially in dynamic imaging, where the acquired data are divided into several short temporal frames. High noise in the reconstructed images translates to poor precision and reproducibility of image features. One important role of denoising is therefore to improve the precision of image features; however, typical denoising methods achieve noise reduction at the expense of accuracy. In this work, we present a novel four-dimensional (4D) denoised image reconstruction framework, validated using 4D simulations, experimental phantom data, and clinical patient data, that achieves 4D noise reduction while preserving spatiotemporal patterns and minimizing the error introduced by denoising. Our proposed 4D denoising operator/kernel is based on HighlY constrained backPRojection (HYPR) and is applied either after each update of the OSEM reconstruction of dynamic 4D PET data or within the recently proposed kernelized reconstruction framework inspired by kernel methods in machine learning. The HYPR4D kernel uses spatiotemporal high-frequency features, extracted from a 4D composite generated within the reconstruction, to preserve spatiotemporal patterns and constrain the 4D noise increment of the image estimate. Results from simulations, experimental phantom, and patient data showed that the HYPR4D kernel with the proposed 4D composite outperformed other denoising methods (standard OSEM with a spatial filter, OSEM with a 4D filter, and the HYPR kernel method with a conventional 3D composite in conjunction with the recently proposed High Temporal Resolution kernel, HYPRC3D-HTR) in terms of 4D noise reduction while preserving the spatiotemporal patterns, or 4D resolution, within the 4D image estimate. Consequently, the error in outcome measures obtained with HYPR4D was less dependent on the region size, contrast, and uniformity or functional patterns within the target structures than with the other methods.
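The core HYPR weighting can be sketched as follows. This is a minimal, illustrative implementation of the generic HYPR-LR-style operation (composite multiplied by the ratio of the low-pass-filtered image to the low-pass-filtered composite), not the authors' exact HYPR4D operator; the Gaussian low-pass filter, the `sigma` value, and the `(t, z, y, x)` array layout are assumptions made for the example.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hypr_denoise(image, composite, sigma=2.0, eps=1e-8):
    """Sketch of a HYPR-style denoising step.

    The noisy 4D image estimate is weighted by the ratio of its low-pass
    version to the low-pass version of a low-noise 4D composite, so that
    high-frequency structure is taken from the composite while the
    low-frequency (quantitative) content of the image is retained.

    image, composite : 4D arrays (t, z, y, x); assumed layout.
    sigma            : width of the Gaussian low-pass filter (assumed).
    eps              : guard against division by zero.
    """
    lp_image = gaussian_filter(image, sigma)
    lp_composite = gaussian_filter(composite, sigma) + eps
    return composite * (lp_image / lp_composite)
```

In a reconstruction loop, such an operator would be applied to the image estimate after each OSEM update, with the composite rebuilt from the available dynamic frames at that point.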
For outcome measures that depend on spatiotemporal tracer uptake patterns, such as the nondisplaceable binding potential (BPND), the root mean squared error in the regional mean of voxel BPND values was reduced from ~8% (OSEM with spatial or 4D filter) to ~3% using HYPRC3D-HTR, and further to ~2% using the proposed HYPR4D method for relatively small target structures (~10 mm in diameter). At the voxel level, HYPR4D produced two to four times lower mean absolute error in BPND relative to HYPRC3D-HTR. Compared to conventional methods, the proposed HYPR4D method can thus produce more robust and accurate image features without requiring any prior information.