Abstract
Recently, extensive studies on generative adversarial networks (GANs) have made great progress in single image super-resolution (SISR). However, a significant gap remains between reconstructed high-frequency details and the real ones. To address this issue, this study presents an SISR approach based on a conditional GAN (SRCGAN). SRCGAN comprises a generator network that produces super-resolution (SR) images and a discriminator network trained to distinguish the SR images from ground-truth high-resolution (HR) ones. Specifically, the discriminator uses the ground-truth HR image as a conditional variable, which guides it in distinguishing real images from SR images and yields a more stable generator than a GAN trained without this guidance. Furthermore, a residual-learning module is introduced into the generator network to mitigate the loss of detail information in SR images. Finally, the network is trained in an end-to-end manner by optimizing a perceptual loss function. Extensive evaluations on four benchmark datasets, Set5, Set14, BSD100, and Urban100, demonstrate the superiority of the proposed SRCGAN over state-of-the-art methods in terms of PSNR, SSIM, and visual quality.
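The two ideas named in the abstract, conditioning the discriminator on the ground-truth HR image and residual learning in the generator, can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy (the function names, the channel-wise concatenation for conditioning, and the single-weight "block" are illustrative stand-ins, not the paper's actual architecture):

```python
import numpy as np

def conditional_disc_input(candidate, condition):
    """Toy conditional-GAN input: stack the candidate image (SR or real HR)
    with the ground-truth HR condition along the channel axis, so the
    discriminator judges the pair rather than the candidate alone.
    Channel-wise concatenation is one common conditioning scheme, assumed
    here for illustration."""
    return np.concatenate([candidate, condition], axis=0)  # (2C, H, W)

def residual_block(x, weight):
    """Toy residual-learning step: the block predicts only a residual,
    which is added back to its input via an identity skip connection,
    so fine detail in x is passed through unchanged."""
    residual = np.tanh(weight * x)  # stand-in for the block's conv layers
    return x + residual

# Hypothetical 3-channel 32x32 images in (C, H, W) layout.
sr = np.random.rand(3, 32, 32)  # generator output (SR image)
hr = np.random.rand(3, 32, 32)  # ground-truth HR image

d_in = conditional_disc_input(sr, hr)
print(d_in.shape)  # (6, 32, 32): candidate channels + condition channels
```

The skip connection in `residual_block` is the key to the detail-preservation claim: when the learned residual is zero, the block reduces to the identity, so the network only has to model the missing high-frequency component rather than re-synthesize the whole image.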