Abstract

In response to rising concerns over radiation exposure in computed tomography (CT) imaging, effective denoising methods for low-dose CT (LDCT) images are crucial. In recent years, deep learning techniques, especially generative adversarial networks (GANs), have significantly improved LDCT denoising, surpassing traditional methods. However, GAN-based denoising methods often struggle to preserve structural consistency and fine details. This study introduces a novel GAN framework with three enhancements to improve the effectiveness of LDCT denoising. First, our generator leverages a complementary learning scheme between image noise and image content via two distinct paths: one path explores the anatomical information of the image, while the other learns the noise pattern. This complementary scheme provides stable noise cancellation while preserving maximal structural information. Second, we propose a novel noise-conscious mean absolute error (MAE) loss to address the non-stationary character of CT noise. In contrast to the conventional MAE loss, this loss prioritizes different parts of the image according to the local noise distribution in each region. We also incorporate a gradient-domain loss that explicitly guides the generator to preserve precise image details. Finally, we adopt a U-Net-based discriminator that better regularizes the model by discriminating between the clean image and the denoised image at both global and local levels. This discriminator adapts better to the non-stationary dynamics of GAN training and guides the generator to produce denoised images that are locally and globally consistent. Thorough experiments on abdominal and lung CT datasets demonstrate the superior performance of our approach compared with state-of-the-art methods.
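The abstract does not give the exact formulation of the noise-conscious MAE loss, but a minimal PyTorch sketch of one plausible reading follows. It assumes a single-channel image tensor and that the local noise level is estimated as the sliding-window standard deviation of the residual; the function name, window size, and normalization scheme are illustrative assumptions, not the authors' definition.

```python
import torch
import torch.nn.functional as F

def noise_conscious_mae(denoised, clean, window=7, eps=1e-6):
    """Hypothetical sketch of a noise-conscious MAE loss.

    Weights the per-pixel absolute error by an estimate of the local
    noise level, so regions with stronger noise receive more attention
    than already-clean regions. The weighting scheme is an assumption.
    """
    residual = torch.abs(denoised - clean)  # per-pixel error, shape (B, 1, H, W)
    # Local noise estimate: standard deviation of the residual in a
    # sliding window, computed as E[x^2] - E[x]^2 via average pooling.
    pad = window // 2
    mean = F.avg_pool2d(residual, window, stride=1, padding=pad)
    mean_sq = F.avg_pool2d(residual ** 2, window, stride=1, padding=pad)
    local_std = torch.sqrt(torch.clamp(mean_sq - mean ** 2, min=0.0))
    # Normalize to a weight map that sums to ~1 per image, then apply
    # it to the plain MAE so noisier regions dominate the loss.
    weight = local_std / (local_std.sum(dim=(2, 3), keepdim=True) + eps)
    return (weight * residual).sum(dim=(2, 3)).mean()
```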
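Similarly, a hedged sketch of the gradient-domain loss, here realized with Sobel filters and an L1 penalty between gradient maps of the denoised and clean images; the paper may use a different gradient operator or norm, so treat the kernel choice and loss form as assumptions.

```python
import torch
import torch.nn.functional as F

def gradient_domain_loss(denoised, clean):
    """Hypothetical sketch of a gradient-domain loss.

    Compares Sobel gradient maps of the denoised and clean images so
    the generator is explicitly penalized for blurring edges and fine
    structures. Assumes single-channel inputs of shape (B, 1, H, W).
    """
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=denoised.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)  # Sobel kernel for the vertical direction

    def grads(img):
        gx = F.conv2d(img, kx, padding=1)
        gy = F.conv2d(img, ky, padding=1)
        return gx, gy

    dx_d, dy_d = grads(denoised)
    dx_c, dy_c = grads(clean)
    # L1 distance between gradient maps in both directions.
    return F.l1_loss(dx_d, dx_c) + F.l1_loss(dy_d, dy_c)
```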
