Abstract
Sparse-view CT is an effective way to reduce the X-ray radiation dose in clinical CT imaging. However, reconstructing images from sparse-view data remains challenging because the measurements are highly undersampled. To this end, we propose a new deep learning-based CT reconstruction model, called DIDR-Net. Unlike existing methods, DIDR-Net adopts a dual-network structure consisting of an iterative reconstruction sub-network and a detail recovery sub-network. The iterative reconstruction sub-network unrolls FISTA (Fast Iterative Shrinkage-Thresholding Algorithm) into a deep network, using a learnable nonlinear sparse transform and learnable shrinkage thresholding to improve reconstruction performance. To avoid losing image details while removing artifacts, we design a detail recovery sub-network: it captures the local details and global information of the initial image through local and global branches, and adaptively fuses the outputs of the two branches through a fusion module. DIDR-Net produces the initial reconstruction and the detail feature map in parallel, and finally fuses the two to reconstruct a high-quality CT image. Experimental results on the public AAPM dataset show that DIDR-Net outperforms other advanced reconstruction algorithms in both streak artifact removal and detail structure preservation.
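To make the unrolling idea concrete, below is a minimal sketch of one unrolled FISTA-style phase with a gradient step on the data-fidelity term, soft-thresholding applied in a learnable transform domain, and a momentum update. This is an illustrative assumption of how such a phase can be implemented, not the paper's implementation; the module, parameter, and operator names (`forward_op`, `adjoint_op`, `sinogram`, step size `rho`, threshold `theta`) are hypothetical placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class UnrolledFISTAPhase(nn.Module):
    """One illustrative unrolled FISTA-style phase: momentum extrapolation,
    a gradient step on ||A x - y||^2 / 2, and soft-thresholding in a
    learnable nonlinear sparse transform domain. Names are hypothetical."""

    def __init__(self, channels: int = 32):
        super().__init__()
        # Learnable step size, shrinkage threshold, and momentum weight.
        self.rho = nn.Parameter(torch.tensor(0.5))
        self.theta = nn.Parameter(torch.tensor(0.01))
        self.momentum = nn.Parameter(torch.tensor(0.1))
        # Learnable nonlinear sparse transform and its approximate inverse.
        self.transform = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.inv_transform = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x, x_prev, forward_op, adjoint_op, sinogram):
        # Acceleration: extrapolate from the previous two iterates.
        z = x + self.momentum * (x - x_prev)
        # Gradient step on the data-fidelity term; forward_op / adjoint_op
        # stand in for the CT projection and back-projection operators.
        r = z - self.rho * adjoint_op(forward_op(z) - sinogram)
        # Soft-thresholding in the learned transform domain.
        coeffs = self.transform(r)
        coeffs = torch.sign(coeffs) * F.relu(coeffs.abs() - self.theta)
        # Map back to the image domain; the residual connection keeps
        # low-frequency content from the gradient-step image.
        return r + self.inv_transform(coeffs), x
```

In a full unrolled network, several such phases would be stacked, each with its own learnable step size, threshold, and transform, and trained end-to-end against reference reconstructions.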