Abstract

Model-based iterative reconstruction (MBIR) algorithms have significantly improved CT image quality by increasing resolution and reducing noise and artifacts. In diagnostic protocols, radiologists often need a high-resolution reconstruction of a limited region of interest (ROI). ROI reconstruction is complicated for MBIR, which must reconstruct the image over the full field of view (FOV) given the full sinogram measurements. Multi-resolution approaches are widely used for MBIR ROI reconstruction: the full-FOV image is first reconstructed at low resolution, and the forward projection of the non-ROI region is subtracted from the original sinogram measurements before the high-resolution ROI reconstruction. However, the low-resolution reconstruction of the full FOV can be susceptible to streaking and blurring artifacts, which can propagate into the subsequent high-resolution ROI reconstruction. To tackle this challenge, we use a coupled dictionary representation model, trained on paired low- and high-resolution data, for artifact removal and super-resolution of the low-resolution full-FOV reconstruction. Experimental results on phantom data show that restoring the full-FOV reconstruction via coupled dictionary learning significantly improves the image quality of the high-resolution MBIR ROI reconstruction.
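To make the coupled dictionary restoration step concrete, the sketch below shows one common way such a model is applied at inference time: each low-resolution patch is sparse-coded over a low-resolution dictionary, and the matching high-resolution patch is synthesized from the paired high-resolution dictionary using the same sparse code. This is a minimal illustration, not the authors' implementation; it assumes the paired dictionaries `D_lo` and `D_hi` have already been learned jointly from low-/high-resolution training patches, and the patch extraction/aggregation steps are omitted.

```python
import numpy as np

def omp(D, y, n_nonzero=3):
    """Greedy orthogonal matching pursuit: sparse-code vector y over dictionary D
    (columns of D are atoms) with at most n_nonzero active atoms."""
    residual = y.astype(float).copy()
    support = []
    code = np.zeros(D.shape[1])
    coeffs = np.zeros(0)
    for _ in range(n_nonzero):
        # Select the atom most correlated with the current residual.
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Least-squares refit on the selected atoms, then update the residual.
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    code[support] = coeffs
    return code

def restore_patches(patches_lo, D_lo, D_hi, n_nonzero=3):
    """Coupled-dictionary restoration: code each low-resolution patch over D_lo,
    then synthesize the corresponding high-resolution patch as D_hi @ code.
    patches_lo: array of shape (n_patches, patch_dim_lo)."""
    return np.stack([D_hi @ omp(D_lo, p, n_nonzero) for p in patches_lo])
```

In practice the restored high-resolution patches would be averaged back into the full-FOV image, whose non-ROI forward projection is then subtracted from the measured sinogram before the final high-resolution ROI MBIR step.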
