Abstract
The accuracy and efficiency of single-image super-resolution (SR) techniques based on convolutional neural networks have improved markedly in recent years. However, most existing algorithms aim to improve the peak signal-to-noise ratio by minimizing the mean squared error between the ground-truth image and the generated SR image. This objective discards high-frequency information and yields results that do not match human visual perception. To reconstruct realistic natural images at large up-sampling factors, we combine the benefits of several recent approaches and propose an SR method based on autoencoding adversarial networks. The proposed architecture consists of a generator, a symmetric encoder–decoder network that extracts feature maps and recovers high-resolution images, and a conditional discriminator that judges whether a generated image comes from the real image distribution. In addition, we extract high-level features from a pretrained network to compute a perceptual loss, which makes the output more accurate. Compared with several state-of-the-art methods, the proposed method is notably effective at recovering fine texture details, and mean opinion scores show that its results are more satisfactory to human observers than those of the compared methods.
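The abstract describes three components: a generator built as a symmetric encoder–decoder, a discriminator conditioned on the low-resolution input, and a perceptual loss computed on high-level features of a pretrained network. The PyTorch sketch below illustrates one way such components could be wired together; the layer widths, the 4x scale factor, the choice of VGG19 and of its feature layer, and all class and function names are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch, not the paper's code: symmetric encoder-decoder generator,
# conditional discriminator, and a VGG-based perceptual loss. All sizes are
# assumptions chosen for illustration.
import torch
import torch.nn as nn
from torchvision.models import vgg19


class Generator(nn.Module):
    """Symmetric encoder-decoder: stride-2 conv blocks mirrored by transposed
    convolutions, followed by sub-pixel (pixel-shuffle) upsampling to HR size."""
    def __init__(self, scale=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=1, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Upsample from LR resolution to HR resolution with sub-pixel convolution.
        self.upsample = nn.Sequential(
            nn.Conv2d(64, 3 * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, lr):
        return self.upsample(self.decoder(self.encoder(lr)))


class ConditionalDiscriminator(nn.Module):
    """Judges an HR candidate conditioned on its LR input: the LR image is
    upsampled to HR size and concatenated channel-wise before classification."""
    def __init__(self, scale=4):
        super().__init__()
        self.scale = scale
        self.net = nn.Sequential(
            nn.Conv2d(6, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(256, 1),
        )

    def forward(self, hr_candidate, lr):
        lr_up = nn.functional.interpolate(
            lr, scale_factor=self.scale, mode="bicubic", align_corners=False
        )
        return self.net(torch.cat([hr_candidate, lr_up], dim=1))


# Perceptual loss: MSE between high-level features of a frozen pretrained VGG19
# (here, features up to the last conv block; the exact layer is an assumption).
vgg_features = vgg19(weights="IMAGENET1K_V1").features[:36].eval()
for p in vgg_features.parameters():
    p.requires_grad = False


def perceptual_loss(sr, hr):
    return nn.functional.mse_loss(vgg_features(sr), vgg_features(hr))
```

In such a setup the generator would be trained with a weighted sum of the perceptual loss and the adversarial loss from the conditional discriminator, while the discriminator is trained to separate ground-truth high-resolution images from generated ones given the same low-resolution input.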