Abstract

Limited-angle computed tomography (CT) image reconstruction is a challenging problem in the field of CT imaging. In some special applications, limited by the geometric space and mechanical structure of the imaging system, projections can only be collected over a scanning range of less than 90°. We call this kind of severe limited-angle problem the ultra-limited-angle problem, which is difficult to alleviate effectively with traditional iterative reconstruction algorithms. With the development of deep learning, the generative adversarial network (GAN) has performed well in image inpainting tasks and can add effective image information to restore missing parts of an image. In this study, given the ability of GANs to generate missing information, the sinogram-inpainting GAN (SI-GAN) is proposed to restore missing sinogram data and suppress the singularity of the truncated sinogram for ultra-limited-angle reconstruction. We propose a U-Net generator and a patch-design discriminator in the SI-GAN to make the network suitable for standard medical CT images. Furthermore, we propose a joint projection-domain and image-domain loss function, in which the weighted image-domain loss is added through a back-projection operation. By feeding paired limited-angle/180° sinograms into the network for training, we obtain a trained model that has extracted the continuity features of sinogram data. Finally, after the estimated sinograms are obtained, a classic CT reconstruction method is used to reconstruct the images. Simulation studies and real-data experiments indicate that the proposed method performs well in reducing the serious artifacts caused by ultra-limited-angle scanning.
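As a hedged illustration of the paired limited-angle/180° training data described above, a full parallel-beam sinogram can simply be masked to the scanned angular range; the array shapes, the 90° cutoff, and the masking scheme here are assumptions for the sketch, not details taken from the paper:

```python
import numpy as np

def truncate_sinogram(sino_full, angles_deg, max_angle_deg=90.0):
    """Zero out projection views beyond the scanned angular range.

    sino_full  : (n_angles, n_detectors) full 180-degree sinogram
    angles_deg : (n_angles,) projection angles in degrees
    Returns the ultra-limited-angle sinogram and the binary view mask.
    """
    mask = (angles_deg < max_angle_deg).astype(sino_full.dtype)
    sino_limited = sino_full * mask[:, None]  # broadcast mask over detectors
    return sino_limited, mask

# Example: a 180-view, 64-detector sinogram truncated to a 90-degree scan.
angles = np.linspace(0.0, 180.0, 180, endpoint=False)
sino_full = np.random.default_rng(0).random((180, 64))
sino_limited, mask = truncate_sinogram(sino_full, angles)
```

In a training pipeline of this kind, `sino_limited` would be the network input and `sino_full` the target, so the generator learns to fill in the missing angular range.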

Highlights

  • X-ray computed tomography (CT) imaging has been successfully applied in medicine, biology, industry, and other fields [1]

  • The reconstruction error is measured per pixel as the difference f_Ref(i) − f(i), summed over i = 1, …, N, where f_Ref denotes the ground-truth CT images, f denotes the images reconstructed from the output sinograms by the sinogram-inpainting generative adversarial network (SI-GAN), i is the pixel index, and N is the total number of pixels in the image

  • In the loss function L(G, D), the parameters λ1 and λ2 together determine the optimal proportion of the sinogram and reconstruction losses in the whole training process of the SI-GAN
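A minimal sketch of how λ1 and λ2 could weight the sinogram and reconstruction losses is given below; the mean-squared-error form of each term and the default weights are assumptions, and the adversarial term of the full L(G, D) is omitted:

```python
import numpy as np

def joint_loss(sino_pred, sino_ref, img_pred, img_ref, lam1=1.0, lam2=0.1):
    """Weighted sum of a sinogram-domain and an image-domain MSE term.

    In the actual training pipeline, img_pred would be obtained from
    sino_pred by a back-projection operation, so the image-domain term
    can be weighted into the total loss.
    """
    sino_mse = np.mean((sino_pred - sino_ref) ** 2)
    img_mse = np.mean((img_pred - img_ref) ** 2)
    return lam1 * sino_mse + lam2 * img_mse
```

When the predicted sinogram and reconstruction match their references exactly, the loss is zero; raising λ2 shifts the balance toward fidelity in the reconstructed image rather than in the projection domain.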



Introduction

X-ray computed tomography (CT) imaging has been successfully applied in medicine, biology, industry, and other fields [1]. Image reconstruction from limited-angle projections can be treated as an inverse problem. Sparse optimization-based image reconstruction methods have recently gained much attention for limited-angle image reconstruction [10,11,12,13]; the representative method is total variation (TV) minimization, including variants such as alternating direction TV minimization (ADTVM) [11]. However, exact reconstructed images are difficult to obtain under ultra-limited-angle scanning. Although other TV-based algorithms utilize additional image prior information [17,18,19], the serious artifacts of the ultra-limited-angle problem are still difficult to reduce. In the ultra-limited-angle problem, the truncated sinogram introduces a singularity, making image reconstruction difficult.
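For concreteness, the TV-minimization idea mentioned above can be sketched as a single descent step on a smoothed anisotropic TV term; the step size, smoothing epsilon, and boundary handling are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def tv_gradient(img, eps=1e-8):
    """Gradient of the smoothed anisotropic TV, sum(sqrt(d^2 + eps))."""
    # Forward differences; appending the edge row/column keeps shapes equal
    # and makes the boundary differences zero.
    dx = np.diff(img, axis=1, append=img[:, -1:])
    dy = np.diff(img, axis=0, append=img[-1:, :])
    gx = dx / np.sqrt(dx**2 + eps)  # smoothed sign of horizontal differences
    gy = dy / np.sqrt(dy**2 + eps)  # smoothed sign of vertical differences
    grad = np.zeros_like(img)
    grad -= gx                      # each pixel's own forward difference
    grad[:, 1:] += gx[:, :-1]       # contribution from the left neighbor
    grad -= gy
    grad[1:, :] += gy[:-1, :]       # contribution from the upper neighbor
    return grad

def tv_step(img, step=0.1):
    """One descent step that smooths the image by reducing its TV."""
    return img - step * tv_gradient(img)
```

In a full iterative reconstruction, such TV steps alternate with data-fidelity steps that enforce consistency with the measured projections.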

Examples by total variation
CT Imaging Theory
Network Design
Image Reconstruction from Estimated Sinogram
Digital CT Image Study
Anthropomorphic Head Phantom Study
Performance Evaluation
Comparison Methods
Parameter Selection of Loss Function
Simulation Study
Real Data Study
Discussion and Conclusions
