Abstract

Low-dose computed tomography (CT) has proved effective in lowering radiation risk for patients, but the resulting noise and bar artifacts in CT images can interfere with medical diagnosis. Because statistical features are difficult to model in the image domain, existing methods that directly process reconstructed images struggle to preserve detailed texture while reducing noise, which limits their use on CT diagnostic images in practice. To overcome this defect, this paper proposes a CT image-denoising method based on an improved residual encoder-decoder network (RED-CNN). First, our approach integrates the notion of recursion into the original residual encoder-decoder network to lower algorithmic complexity and boost denoising efficiency: the original CT image and the output of the current recursion are fed together as the input to the next recursion, so the shallow encoder-decoder network is reused across recursions. Second, a root-mean-square error loss function and a perceptual loss function are introduced to preserve the texture of denoised CT images. On this basis, a tissue-processing step based on clustering segmentation is optimized, since images produced by the improved RED-CNN after training may still contain residual artifacts. Finally, experimental results on the TCGA-COAD clinical data set show that, under the same experimental conditions, our method outperforms WGAN in the average post-denoising PSNR and SSIM of CT images. Moreover, with lower algorithmic complexity and shorter execution time, our method improves markedly on RED-CNN and is applicable to real-world scenarios.
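The recursion scheme described above can be illustrated with a minimal NumPy sketch. This is a simplified illustration, not the paper's implementation: the real model is a convolutional encoder-decoder with learned weights, whereas here a single shared smoothing step stands in for the network, and the "perceptual" loss is stubbed out as a mean-squared error in a fixed random feature space (a real version would use pretrained VGG features). All function names and parameters are hypothetical.

```python
import numpy as np

def shallow_denoiser(x_orig, y_prev, w):
    # Hypothetical stand-in for the shallow encoder-decoder network:
    # fuse the original noisy image with the previous recursion's output,
    # then apply the shared (recycled) parameter w.
    fused = 0.5 * (x_orig + y_prev)
    return np.clip(fused * w, 0.0, 1.0)

def recursive_denoise(x_noisy, w=0.95, steps=3):
    # At every recursion step, the ORIGINAL image and the previous
    # output are fed together into the SAME shallow network, so no
    # extra parameters are added as the number of recursions grows.
    y = x_noisy
    for _ in range(steps):
        y = shallow_denoiser(x_noisy, y, w)
    return y

def rmse_loss(pred, target):
    # Root-mean-square error between the denoised and reference images.
    return float(np.sqrt(np.mean((pred - target) ** 2)))

def perceptual_loss(pred, target, proj):
    # Placeholder perceptual loss: MSE between fixed linear projections
    # of the two images, standing in for deep-feature distances.
    return float(np.mean((pred @ proj - target @ proj) ** 2))

def total_loss(pred, target, proj, lam=0.1):
    # Combined objective: RMSE term plus a weighted perceptual term.
    return rmse_loss(pred, target) + lam * perceptual_loss(pred, target, proj)
```

The key design point the sketch mirrors is weight sharing: each recursion reuses the same shallow network, so deeper effective processing comes at no increase in parameter count.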
