Abstract

Computed tomography (CT) imaging has become an indispensable auxiliary method in medical diagnosis and treatment. To mitigate the radiation damage caused by X-rays, low-dose computed tomography (LDCT) scanning is becoming more widely applied. However, LDCT scanning reduces the signal-to-noise ratio of the projection, and the resulting images suffer from severe streak artifacts and spot noise. In particular, the intensity of noise and artifacts varies significantly across different body parts under a single low-dose protocol. To improve the quality of differently degraded LDCT images within a unified framework, we developed a generative adversarial learning framework with a dynamic controllable residual. First, the generator network consists of a basic subnetwork and a conditional subnetwork. Inspired by the dynamic control strategy, we designed the basic subnetwork with a residual architecture, in which the conditional subnetwork provides weights that control the residual intensity. Second, we chose the Visual Geometry Group Network-128 (VGG-128) as the discriminator to improve the noise-artifact suppression and feature-retention ability of the generator. Additionally, a hybrid loss function was specifically designed, combining the mean square error (MSE) loss, structural similarity index measure (SSIM) loss, adversarial loss, and gradient penalty (GP) loss. The results obtained on two datasets show the competitive performance of the proposed framework, with a 3.22 dB peak signal-to-noise ratio (PSNR) margin, 0.03 SSIM margin, and 0.2 contrast-to-noise ratio margin on the Challenge data, and a 1.0 dB PSNR margin and 0.01 SSIM margin on the real data. Experimental results demonstrated the competitive performance of the proposed method in terms of noise reduction, structural retention, and visual quality improvement.
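The dynamic controllable residual described above can be summarized as scaling the residual branch of the generator by a weight produced from the scan condition, i.e., output = x + w · F(x). The following is a minimal NumPy sketch of that control scheme only; the function names, the toy residual mapping, and the sigmoid weighting are illustrative assumptions, not the paper's actual network.

```python
import numpy as np

def basic_subnetwork(x):
    # Hypothetical stand-in for the learned residual mapping F(x);
    # a real implementation would be a deep CNN trained to predict
    # the noise/artifact component of the LDCT image.
    return -0.5 * x  # toy residual: removes half of the signal as "noise"

def conditional_subnetwork(condition):
    # Hypothetical conditional branch: maps a scalar scan-condition
    # descriptor (e.g., body part or dose level encoding) to a
    # residual weight w in (0, 1) via a sigmoid.
    return 1.0 / (1.0 + np.exp(-condition))

def dynamic_controllable_residual(x, condition):
    # Core idea from the abstract: the conditional subnetwork supplies
    # a weight that controls how strongly the residual is applied.
    w = conditional_subnetwork(condition)
    return x + w * basic_subnetwork(x)
```

Under this sketch, a stronger degradation condition drives w toward 1 (full residual correction), while a milder one drives w toward 0 (near-identity), which is what lets a single generator handle differently degraded inputs.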
