Abstract

Recently, supervised deep super-resolution (SR) networks have achieved great success in both accuracy and texture generation. However, most methods are trained on datasets in which high-resolution images and their low-resolution counterparts are related by a fixed degradation kernel (such as bicubic downsampling). In real-life applications, images are often degraded by additional artifacts, e.g., the non-ideal point-spread function in old film photos or compression loss in cellphone photos. Generating a satisfactory SR image from a single low-resolution (LR) image with such an unknown prior remains a challenging problem. In this paper, we propose a novel unsupervised method, unsupervised single-image SR with multi-gram loss (UMGSR), to overcome this dilemma. This paper makes two significant contributions: (a) we design a new architecture that extracts more information from limited inputs by combining local residual blocks with two-step global residual learning; (b) we introduce a multi-gram loss for the SR task to effectively generate better image details. Experimental comparison shows that, under normal conditions, our unsupervised method attains better visual results than other supervised SR methods.
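The global residual learning mentioned in contribution (a) can be illustrated with a minimal sketch: the network predicts only a high-frequency residual that is added to a naively upsampled copy of the LR input. This is an assumption-laden simplification, not the paper's actual architecture; `residual_net` is a hypothetical stand-in for the stack of local residual blocks, and nearest-neighbour upsampling stands in for whatever interpolation the method actually uses.

```python
import numpy as np

def nearest_upsample(lr, scale=2):
    # Naive nearest-neighbour upsampling serves as the identity path
    # that carries the low-frequency content of the LR input.
    return np.repeat(np.repeat(lr, scale, axis=0), scale, axis=1)

def global_residual_sr(lr, residual_net, scale=2):
    # Global residual learning: the (hypothetical) residual_net predicts
    # only the high-frequency residual; the upsampled LR image supplies
    # the rest, so the network need not relearn the coarse structure.
    base = nearest_upsample(lr, scale)
    return base + residual_net(base)

# Usage with a trivial stand-in network that predicts a zero residual:
lr = np.ones((2, 2))
sr = global_residual_sr(lr, lambda x: np.zeros_like(x))
```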

Highlights

  • Super-resolution (SR) based on deep learning (DL) has received much attention from the community [1,2,3,4,5,6,7]

  • Convolutional neural network (CNN)-based models have consistently yielded significant improvements in SR generation

  • In this paper, we propose a new unsupervised single-image DL-SR method with multi-gram loss (UMGSR)
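The multi-gram loss named in the highlights builds on Gram-matrix feature statistics. The sketch below assumes the standard Gram-matrix style-loss formulation (squared Gram differences summed over several feature layers) with a `1/(CHW)` normalization; the paper's exact layer choices and weighting are not given here, so treat every detail as an assumption.

```python
import numpy as np

def gram_matrix(features):
    # features: (C, H, W) feature map from one network layer.
    # The Gram matrix captures channel-to-channel correlations,
    # i.e. texture statistics independent of spatial position.
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)  # normalization is an assumed choice

def multi_gram_loss(feats_sr, feats_ref):
    # Sum squared Gram-matrix differences over multiple layers
    # (hence "multi-gram"); layer weights are omitted for simplicity.
    return sum(np.sum((gram_matrix(a) - gram_matrix(b)) ** 2)
               for a, b in zip(feats_sr, feats_ref))
```

In practice the feature maps would come from a fixed pretrained network evaluated on the SR output and a reference image; matching their Gram statistics encourages similar textures rather than pixel-wise identity.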



Introduction

Super-resolution (SR) based on deep learning (DL) has received much attention from the community [1,2,3,4,5,6,7]. Large numbers of high-resolution (HR)–low-resolution (LR) image pairs are the building blocks of supervised DL-SR methods: training uses the HR image as the supervision signal to guide the learning process. Most DL-SR methods train on datasets with a fixed degradation kernel between HR and LR images. This fixed-kernel assumption creates a fairly unrealistic setting that holds only in certain circumstances: when a test image deviates from the fixed kernel of the training data, performance drops by a large margin. This phenomenon is highlighted in ZSSR [11].
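The fixed-kernel setup described above can be sketched as a dataset-generation step: every LR training input is produced from its HR counterpart by the same known kernel. In the sketch a simple box (average) kernel stands in for bicubic downsampling, which is an assumption made purely to keep the example dependency-free; the point is only that the degradation is fixed and known, whereas real images violate it.

```python
import numpy as np

def downsample_fixed_kernel(hr, scale=2):
    # Fixed, known degradation: a box (average) kernel stands in for
    # bicubic here. Supervised DL-SR implicitly assumes test images
    # were degraded by this exact kernel.
    h, w = hr.shape
    h2, w2 = h // scale, w // scale
    cropped = hr[:h2 * scale, :w2 * scale]
    return cropped.reshape(h2, scale, w2, scale).mean(axis=(1, 3))

# One supervised training pair: (lr, hr).
hr = np.arange(16, dtype=float).reshape(4, 4)
lr = downsample_fixed_kernel(hr, scale=2)
```

A real photo degraded instead by, say, an unknown point-spread function plus compression falls outside this training distribution, which is the mismatch the unsupervised approach targets.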
