Abstract

Sparse-view computed tomography (CT) has recently been proposed as a promising way to speed up data acquisition and reduce the radiation dose delivered to patients. However, traditional reconstruction algorithms are time-consuming and suffer from image degradation when applied to sparse-view data. To address this problem, we propose a new deep learning (DL) framework that quickly produces high-quality CT images from sparsely sampled projections and is suitable for clinical use. The proposed model combines convolutional and residual neural networks in a parallel manner and is named the parallel residual neural network (PARS-Net). In addition, PARS-Net benefits from a geodesic-distance-based loss that effectively reflects image structures. Experiments were performed on a combination of two large-scale CT datasets of whole-body patient scans, with sparse acquisitions of 120, 60, and 30 projection views. The experimental results show that PARS-Net is 4–5 times faster than state-of-the-art DL-based models, requires less memory, and achieves better objective quality scores and improved visual quality. These results demonstrate that PARS-Net outperforms recent methods and confirm the feasibility of using it for high-quality CT image reconstruction from sparsely sampled projections.
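
The abstract describes the model only at a high level (convolutional and residual branches arranged in parallel); the layer sizes, depths, fusion rule, and the geodesic-distance loss are not specified here. The following is a minimal PyTorch-style sketch of what such a parallel convolutional/residual block could look like; all names and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a "parallel" convolutional + residual block (assumed
# PyTorch implementation). Channel counts, depths, and the fusion rule are
# placeholders, since the abstract does not specify them.
import torch
import torch.nn as nn


class ParallelResidualBlock(nn.Module):
    """Runs a plain convolutional path and a residual path side by side
    and fuses their outputs; the exact fusion used in PARS-Net is an
    assumption here (concatenation followed by a 1x1 convolution)."""

    def __init__(self, channels: int = 64):
        super().__init__()
        # Convolutional branch: two stacked conv layers.
        self.conv_branch = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        # Residual branch: conv layers wrapped by an identity skip connection.
        self.res_branch = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        conv_out = self.conv_branch(x)
        res_out = x + self.res_branch(x)  # residual (skip) connection
        # Concatenate the two parallel paths and fuse with a 1x1 conv.
        return self.fuse(torch.cat([conv_out, res_out], dim=1))


if __name__ == "__main__":
    block = ParallelResidualBlock(channels=64)
    dummy = torch.randn(1, 64, 128, 128)  # stand-in feature map from a sparse-view CT input
    print(block(dummy).shape)  # torch.Size([1, 64, 128, 128])
```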
