Abstract

Due to the difficulty of collecting paired low-resolution (LR) and high-resolution (HR) images in real-world scenarios, most existing deep convolutional neural network (CNN)-based single image super-resolution (SR) models are trained on artificially synthesized LR-HR image pairs. However, the domain gap between the synthetic training data and the realistic testing data degrades SR performance significantly, which discourages the application of SR models in practice. One possible solution is to learn from unpaired real-world LR and HR images, which are readily accessible. Predominant strategies are mainly based on unsupervised domain translation. Despite great advances, noticeable domain gaps remain between the realistic-like/synthetic-like images generated by unpaired translation and the truly realistic/synthetic ones. To address this problem, this letter proposes an effective unsupervised SR framework based on dual synthetic-to-realistic and realistic-to-synthetic translations, namely DTSR. Specifically, to bridge the domain gap between testing and training data, the SR model is optimized on HR images and their realistic-like LR counterparts produced by the synthetic-to-realistic translation. In turn, we narrow the domain gap further by applying the realistic-to-synthetic translation to realistic LR images prior to super-resolving, which also presents the SR model with simpler examples at test time than those seen during training. Moreover, focal frequency and bilateral filtering losses are introduced into DTSR for better detail restoration and artifact suppression. Extensive experiments show that our DTSR outperforms several state-of-the-art models in both quantitative and qualitative comparisons.
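For concreteness, the following is a minimal, hypothetical sketch of the inference-side realistic-to-synthetic step described above. The module names r2s_translator and sr_model are placeholder assumptions, not the authors' released code; the sketch only illustrates the ordering of translation before super-resolution.

    import torch

    @torch.no_grad()
    def super_resolve(lr_real, r2s_translator, sr_model):
        # Map the real-world LR image toward the synthetic LR domain
        # to narrow the domain gap before super-resolving.
        lr_synthetic_like = r2s_translator(lr_real)
        # Super-resolve the translated, more synthetic-like input.
        return sr_model(lr_synthetic_like)

The focal frequency loss mentioned in the abstract follows the general formulation of Jiang et al. (ICCV 2021): per-frequency errors between the FFT spectra of the prediction and the target are reweighted so that hard-to-restore frequencies dominate training. Below is a minimal sketch under the assumptions of (N, C, H, W) image tensors and the common setting alpha = 1; the exact variant used in DTSR may differ.

    def focal_frequency_loss(pred, target, alpha=1.0):
        # Orthonormal 2-D FFT of prediction and target (complex spectra).
        pred_f = torch.fft.fft2(pred, norm="ortho")
        target_f = torch.fft.fft2(target, norm="ortho")
        # Squared per-frequency spectral distance.
        dist = (pred_f - target_f).abs() ** 2
        # Focal weights: frequencies with larger errors get larger weights,
        # concentrating training on hard-to-restore frequency bands.
        weight = dist.sqrt() ** alpha
        weight = weight / weight.amax(dim=(-2, -1), keepdim=True).clamp(min=1e-8)
        # The weight matrix is treated as constant (no gradient through it).
        return (weight.detach() * dist).mean()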
