Abstract

Low-dose and sparse-view CT are effective approaches to reducing radiation dose and accelerating scan speed. However, images reconstructed from the insufficient data acquired by low-dose and sparse-view CT suffer from severe streaking artifacts, so reducing the radiation dose further degrades imaging quality. Several attempts have been made to remove these artifacts using deep learning methods such as convolutional neural networks (CNNs). Although deep learning methods for low-dose and sparse-view CT reconstruction have achieved impressive results, the reconstructions are still over-smoothed. In this work, we propose an artifact-reduction method for low-dose and sparse-view CT using a single model trained with generative adversarial networks (GANs). Several numerical simulation experiments were implemented to test the performance of our network. The results show that our GAN significantly reduces streaking artifacts compared with the FBP method and preserves more detailed information than a CNN.
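The abstract does not state the exact adversarial objective used. As a point of reference, the following is a minimal sketch of the standard GAN losses such an artifact-reduction model would typically optimize, assuming the original minimax formulation with a discriminator output D in (0, 1); the function names here are illustrative, not taken from the paper.

```python
import math

def neg_log(p):
    """-log(p), clipped away from zero for numerical safety."""
    return -math.log(max(p, 1e-12))

def discriminator_loss(d_real, d_fake):
    """Standard GAN discriminator loss, averaged over a batch:
    -[log D(x) + log(1 - D(G(z)))].
    d_real: discriminator outputs on real (full-dose) images.
    d_fake: discriminator outputs on generated (artifact-reduced) images."""
    return sum(neg_log(r) + neg_log(1.0 - f)
               for r, f in zip(d_real, d_fake)) / len(d_real)

def generator_loss(d_fake):
    """Non-saturating generator loss, averaged over a batch: -log D(G(z))."""
    return sum(neg_log(f) for f in d_fake) / len(d_fake)

# An undecided discriminator (output 0.5 everywhere) gives the
# equilibrium values: D loss = 2*ln(2), G loss = ln(2).
print(discriminator_loss([0.5, 0.5], [0.5, 0.5]))  # ≈ 1.3863
print(generator_loss([0.5, 0.5]))                  # ≈ 0.6931
```

In practice, GAN-based CT reconstruction methods usually combine such an adversarial term with a pixel-wise fidelity loss; the weighting between the two controls the trade-off between sharpness and over-smoothing that the abstract highlights.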
