Abstract

Inherent to computed tomography (CT) is image reconstruction, the estimation of 3D voxel values from noisy projection data. Modeling this inverse operation is not straightforward. Given the ill-posed nature of the inverse problem in CT reconstruction, data-driven methods require regularization to enhance the accuracy of the reconstructed images. Moreover, the generalizability of the results hinges on the availability of large training datasets with access to ground truth. This paper offers a new strategy for reconstructing CT images, with ground truth made accessible through a virtual imaging trial (VIT) platform. A learned primal-dual deep neural network (LPD-DNN) employed the forward model and its adjoint as surrogates for the geometry and physics of the imaging process. The VIT platform supplied simulated CT projections paired with ground-truth labels from anthropomorphic human models, free of image noise and resolution degradation. The models were drawn from a library of anthropomorphic computational patient models (XCAT). The DukeSim simulator was used to form realistic projection data emulating the physics and geometry of a commercial-equivalent CT scanner; the resulting noisy sinogram for each slice was used for training. The linear attenuation coefficients of the phantoms' materials at the effective energy of the x-ray spectrum served as the corresponding ground-truth labels. The LPD-DNN was deployed to learn the complex operators and hyperparameters of the proximal primal-dual optimization. On validation data, the method achieved a normalized root-mean-square error of 12% with respect to the ground-truth labels, a peak signal-to-noise ratio of 32 dB, a signal-to-noise ratio of 1.5, and a structural similarity index of 96%. These results compared highly favorably with standard filtered back-projection reconstruction (65%, 17 dB, 1.0, 26%).
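
For readers unfamiliar with the method, the LPD-DNN builds on the learned primal-dual algorithm of Adler and Öktem, which unrolls a fixed number K of alternating updates in the dual (sinogram) and primal (image) domains. A minimal sketch of the generic iteration, in our own notation rather than the paper's, is:

    h_{k+1} = \Gamma_{\theta_k^{d}}(h_k,\, A f_k,\, g)
    f_{k+1} = \Lambda_{\theta_k^{p}}(f_k,\, A^{*} h_{k+1})

Here g is the measured sinogram, A the forward projection operator, A^{*} its adjoint (back-projection), and \Gamma_{\theta_k^{d}} and \Lambda_{\theta_k^{p}} are small convolutional networks whose parameters are learned separately for each of the K unrolled iterations; f_0 and h_0 are typically initialized to zero, and the reconstruction is taken as f_K.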
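
The validation figures quoted above are standard image-quality measures. As an illustration only, the following Python sketch shows how such metrics could be computed with scikit-image for one reconstructed slice; the evaluate function, the min-max data range, and the mean-over-standard-deviation SNR definition are our assumptions, not details given in the paper.

    import numpy as np
    from skimage.metrics import (normalized_root_mse,
                                 peak_signal_noise_ratio,
                                 structural_similarity)

    def evaluate(recon: np.ndarray, truth: np.ndarray):
        """Compare a reconstructed slice against its ground-truth attenuation map."""
        data_range = float(truth.max() - truth.min())
        # NRMSE (skimage's default Euclidean normalization; reported: 12% vs. 65% for FBP)
        nrmse = normalized_root_mse(truth, recon)
        # PSNR in dB (reported: 32 dB vs. 17 dB for FBP)
        psnr = peak_signal_noise_ratio(truth, recon, data_range=data_range)
        # SSIM (reported: 96% vs. 26% for FBP)
        ssim = structural_similarity(truth, recon, data_range=data_range)
        # One common SNR definition (mean over standard deviation); assumed, not the paper's
        snr = float(recon.mean() / recon.std())
        return nrmse, psnr, ssim, snr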
