Abstract

Deep learning models for the Single Image Super-Resolution (SISR) task have found success in recent years. However, one of the prime limitations of existing deep learning-based SISR approaches is that they require supervised training. Specifically, the Low-Resolution (LR) images are obtained through a known degradation (for instance, bicubic downsampling) from the High-Resolution (HR) images to provide supervised data as LR-HR pairs. Such training results in a domain shift of the learnt models when real-world data is presented with degradation factors not seen in the training set. To address this challenge, we propose an unsupervised approach for the SISR task using a Generative Adversarial Network (GAN), which we refer to hereafter as DUS-GAN. The novel design of the proposed method accomplishes the SR task without degradation estimation of real-world LR data. In addition, a new human perception-based quality assessment loss, i.e., Mean Opinion Score (MOS), has also been introduced to boost the perceptual quality of SR results. The effectiveness of the proposed method is validated with numerous experiments on different reference-based (i.e., the NTIRE Real-world SR Challenge validation dataset) and no-reference-based (i.e., NTIRE Real-world SR Challenge Track-1 and Track-2) testing datasets. The experimental analysis demonstrates consistent improvement of the proposed method over other state-of-the-art unsupervised SR approaches, in terms of both subjective and quantitative evaluations on different reference metrics (i.e., LPIPS, PI-RMSE graph) and no-reference quality measures such as NIQE, BRISQUE and PIQE. We also provide the implementation of the proposed approach (https://github.com/kalpeshjp89/DUSGAN) to support reproducible research.
