Abstract

Although deep convolutional neural networks (DCNNs) and generative adversarial networks (GANs) have achieved remarkable success in image denoising, they face a severe trade-off between removing noise and artifacts on the one hand and preserving details on the other. Compared with conventional DCNNs, GANs can better balance erasing different types of noise against recovering texture details. However, they often generate fake details and unexpected artifacts in the image because their discriminator is unstable during training. In this study, we propose a hierarchical generative adversarial network (HI-GAN) that addresses these serious problems of image denoising. Unlike a conventional GAN, the proposed HI-GAN comprises three main generators. The first generator tackles the loss of high-frequency features such as edges and texture; it is trained together with the discriminator to improve its ability to preserve essential details. The second generator focuses on eliminating the instabilities caused by the discriminator and on restoring low-frequency features in the noisy image. The two generators use different criteria to evaluate denoising performance, and neither outperforms the other; a third generator is therefore employed to help them cooperate more effectively and boost reconstruction performance. Moreover, to improve the effectiveness of the generators, we propose a novel boosted residual dense UNet designed to maximize the information flow through all convolutional layers in the network. In addition, we propose the AdaRaGAN loss function, which effectively prevents instability of the HI-GAN discriminator and improves denoising performance.
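The abstract does not specify the form of the AdaRaGAN loss; as a point of reference, the following minimal NumPy sketch shows the standard relativistic average GAN (RaGAN) discriminator loss that the name suggests it builds on, in which the discriminator compares each sample against the *average* logit of the opposite class rather than against an absolute real/fake decision. All function and variable names here are illustrative, not taken from the paper.

```python
# Illustrative sketch of a relativistic average GAN (RaGAN) discriminator
# loss. The paper's AdaRaGAN variant is not described in the abstract;
# this shows only the baseline RaGAN formulation for orientation.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ragan_d_loss(real_logits, fake_logits):
    """Relativistic average discriminator loss.

    The discriminator is trained so that real samples look more
    realistic than the average fake sample, and fake samples look
    less realistic than the average real sample.
    """
    d_real = sigmoid(real_logits - fake_logits.mean())  # real vs. avg fake
    d_fake = sigmoid(fake_logits - real_logits.mean())  # fake vs. avg real
    eps = 1e-12  # numerical guard for log
    return -(np.log(d_real + eps).mean()
             + np.log(1.0 - d_fake + eps).mean())

# Hypothetical discriminator logits on real and denoised (generated) images.
real = np.array([2.0, 1.5, 2.5])
fake = np.array([-2.0, -1.0, -1.5])
loss = ragan_d_loss(real, fake)
```

A well-separated pair of logit sets yields a small loss; swapping the real and fake logits (a discriminator that is fooled) yields a much larger one, which is the signal the generators train against.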
Experiments on challenging datasets of real-world noisy images show that the proposed method is superior to other state-of-the-art denoisers in terms of quantitative metrics and visual quality. Our source code and datasets for HI-GAN are available at https://github.com/ZeroZero19/HI-GAN.git.
