Abstract
Compressed sensing (CS) image reconstruction in CT suffers from drawbacks such as 1) the appearance of staircase artifacts and 2) the loss of image textures and smooth intensity changes. These drawbacks stem from the fact that CS approximates the image by a piecewise-constant function. To overcome them, we previously proposed a framework to improve image quality in CS using deep learning. In this framework, the FBP reconstructed image and the CS (TV or Nonlocal TV) reconstructed image are input to a CNN with two input channels and a single output channel, and the final reconstructed image is obtained as the output of the CNN. The parameters (weights and biases) of the CNN, together with the regularization parameter of CS, are estimated by minimizing an average least-squares loss function over learning data, i.e., a set of triplets of degraded FBP reconstruction, CS reconstruction, and answer image. In this paper, this framework is extended to 3-D image reconstruction in helical cone-beam CT operated with a low-dose scanning protocol. The extension is done as follows. First, we prepare N different 2-D denoising CNNs (CNN<sub>1</sub>, CNN<sub>2</sub>, . . . , CNN<sub>N</sub>) depending on the slice position n. Each slice of the short-scan FDK reconstruction without denoising, y<sub>i</sub>, and with 3-D TV (or Nonlocal TV) denoising, z<sub>i</sub>, is input to the CNN<sub>n</sub> with the closest slice index n, which yields a corresponding output image x<sub>i</sub> for each slice. The final reconstructed image is obtained by stacking every slice x<sub>i</sub> (i = 1, 2, . . . , I).
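The per-slice pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the names (`cnn_n`, `reconstruct`) are hypothetical, and a trivial slice-position-dependent blend stands in for the trained denoising networks CNN<sub>1</sub>, …, CNN<sub>N</sub>. It shows only the data flow: for each slice i, the FDK slice y<sub>i</sub> and the TV-denoised slice z<sub>i</sub> form a two-channel input to the network with the closest slice index n, and the single-channel outputs x<sub>i</sub> are stacked into the final volume.

```python
import numpy as np

N = 4          # number of slice-position-dependent networks CNN_1..CNN_N (assumed)
I = 8          # number of slices in the reconstructed volume (assumed)
H = W = 16     # in-plane slice size (assumed)

rng = np.random.default_rng(0)
fbp = rng.normal(size=(I, H, W))   # y_i: short-scan FDK (FBP) slices, no denoising
tv = rng.normal(size=(I, H, W))    # z_i: slices after 3-D TV denoising

def cnn_n(n, y, z):
    """Toy stand-in for CNN_n: two input channels, one output channel.

    The real method uses a trained 2-D CNN per slice position; here a fixed
    position-dependent blend of the two channels plays that role.
    """
    w = (n + 1) / (N + 1)
    return w * y + (1.0 - w) * z

def reconstruct(fbp, tv):
    slices = []
    for i in range(I):
        # choose the network whose slice index n is closest to slice i
        n = int(round(i * (N - 1) / (I - 1)))
        slices.append(cnn_n(n, fbp[i], tv[i]))
    # final reconstructed volume: stack every output slice x_i
    return np.stack(slices)

volume = reconstruct(fbp, tv)
print(volume.shape)  # (8, 16, 16)
```

In practice each CNN<sub>n</sub> would be a trained denoising network and its weights, together with the CS regularization parameter, would come from minimizing the least-squares loss over the training triplets.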