Abstract

Sparse-view computed tomography (CT) has attracted considerable attention for reducing both scanning time and radiation dose. However, sparsely sampled projection data generate severe streak artifacts in the reconstructed images. In recent decades, many sparse-view CT reconstruction techniques based on fully supervised learning have been proposed and have shown promising results. However, it is not feasible to acquire pairs of full-view and sparse-view CT images in real clinical practice. In this study, we propose a novel self-supervised convolutional neural network (CNN) method to reduce streak artifacts in sparse-view CT images. We generate the training dataset using only sparse-view CT data and train the CNN in a self-supervised manner. Since the streak artifacts can be estimated from prior images under the same CT geometry, we acquire prior images by iteratively applying the trained network to the given sparse-view CT images. We then subtract the estimated streak artifacts from the given sparse-view CT images to produce the final results. We validated the imaging performance of the proposed method using the extended cardiac-torso (XCAT) phantom and the 2016 AAPM Low-Dose CT Grand Challenge dataset from Mayo Clinic. In visual inspection and modulation transfer function (MTF) analysis, the proposed method preserved anatomical structures effectively and showed higher image resolution than other streak artifact reduction methods across all projection-view settings. In summary, we propose a new framework for streak artifact reduction when only sparse-view CT data are available. Although the network is trained without any information from full-view CT data, the proposed method achieved the highest performance in preserving fine details. By removing the paired-dataset requirement of fully supervised methods, we expect that our framework can be utilized in the medical imaging field.
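To make the inference pipeline concrete, the following is a minimal PyTorch-style sketch of the steps described above. It is an illustration under stated assumptions, not the authors' released code: `model` (the trained self-supervised network), `radon` (forward projection onto the sparse view set), and `iradon` (reconstruction from those views) are hypothetical stand-ins for the paper's actual operators, and `n_iters` is an assumed iteration count.

```python
import torch


def reduce_streak_artifacts(sparse_ct, model, radon, iradon, n_iters=3):
    """Sketch of the proposed inference: estimate streak artifacts from a
    prior image and subtract them from the given sparse-view CT image.

    sparse_ct : reconstructed sparse-view CT image, shape (B, 1, H, W)
    model     : trained self-supervised artifact-reduction CNN (hypothetical)
    radon     : forward projector onto the sparse view set (hypothetical)
    iradon    : reconstruction operator for those views (hypothetical)
    """
    with torch.no_grad():
        # 1) Obtain a prior image by iteratively applying the trained network
        #    to the given sparse-view CT image.
        prior = sparse_ct
        for _ in range(n_iters):
            prior = model(prior)

        # 2) Re-simulate sparse-view acquisition of the prior under the same
        #    CT geometry; the difference between the resimulated image and
        #    the prior approximates the streak artifacts.
        sparse_prior = iradon(radon(prior))
        streaks = sparse_prior - prior

        # 3) Subtract the estimated streak artifacts from the given
        #    sparse-view CT image to produce the final result.
        return sparse_ct - streaks
```

The key design point, as stated in the abstract, is that the artifact pattern depends on the CT geometry rather than on the specific object, so projecting and reconstructing the prior image through the same sparse-view geometry reproduces the streaks without requiring any full-view reference image.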

