Abstract

Nowadays, batch normalization (BN) has become a core component of deep learning. Owing to its ability to stabilize training, image super-resolution reconstruction models such as the enhanced super-resolution generative adversarial network (ESRGAN) employ BN. However, BN mixes statistics from different images during normalization, which introduces artifacts into the generated super-resolution images. For this reason, the generator of ESRGAN removes BN while the discriminator retains it. Yet BN in the discriminator likewise normalizes information across different images, which degrades the discriminator's judgment. Motivated by this, we replace BN in the discriminator with layer normalization (LN), instance normalization (IN), group normalization (GN), and representative batch normalization (RBN), and also evaluate the discriminator without any normalization operation. After extensive experiments, ESRGAN reaches the state of the art on the Set5 dataset when GN is used in the discriminator (PSNR: 27.72, SSIM: 0.8316).

Keywords: Image super-resolution; Generative adversarial networks; Normalization
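The key distinction the abstract relies on is that BN computes statistics across the whole batch (so different images influence one another), whereas GN computes statistics per sample over channel groups. A minimal NumPy sketch of the two operations, written for illustration only (the function names and the omission of learnable affine parameters are our assumptions, not the paper's implementation):

```python
import numpy as np

def group_norm(x, num_groups, eps=1e-5):
    """Group normalization: per-sample statistics over channel groups,
    so each image is normalized independently of the rest of the batch.
    x has shape (N, C, H, W); C must be divisible by num_groups."""
    n, c, h, w = x.shape
    assert c % num_groups == 0
    g = x.reshape(n, num_groups, c // num_groups, h, w)
    mean = g.mean(axis=(2, 3, 4), keepdims=True)
    var = g.var(axis=(2, 3, 4), keepdims=True)
    g = (g - mean) / np.sqrt(var + eps)
    return g.reshape(n, c, h, w)

def batch_norm(x, eps=1e-5):
    """Batch normalization (training mode): statistics are shared across
    the batch, so different images affect each other's normalization."""
    mean = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)
```

Changing one image in the batch changes the BN output of every other image, but leaves the GN output of the others untouched; this per-sample independence is the property the paper exploits when swapping BN for GN in the discriminator.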
