Abstract

Most real-world super-resolution methods rely on synthetic image pairs for training. However, the frequency-domain gap between synthetic and real-world images leads to artifacts and blurred reconstructions. This work identifies the main cause of this gap: aliasing is present in real-world images, but the degradation models used to generate synthetic images ignore its effect. A method is therefore proposed to assess aliasing in images with unknown degradation by measuring their distance to alias-free counterparts. Leveraging this assessment, a domain-translation framework is introduced to learn the degradation from high-resolution to low-resolution images; it employs a frequency-domain branch and loss function to generate synthetic images with aliasing characteristics. Experiments validate that the proposed framework improves both visual quality and quantitative results over existing super-resolution models across diverse real-world benchmarks. In summary, this work offers a practical solution to real-world super-resolution by narrowing the frequency-domain gap between synthetic and real-world images.
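The core idea of comparing an image against an alias-free counterpart in the frequency domain can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the paper's method: it contrasts naive subsampling (which aliases) with average-pool downsampling (a crude anti-aliasing prefilter) and scores the spectral distance between the two results; all function names are hypothetical.

```python
import numpy as np

def downsample_naive(img, factor=2):
    # Direct subsampling keeps frequencies above the new Nyquist
    # limit, so it introduces aliasing.
    return img[::factor, ::factor]

def downsample_antialiased(img, factor=2):
    # Average-pooling before subsampling attenuates high frequencies;
    # a crude stand-in for a proper anti-aliasing prefilter.
    h, w = img.shape
    h2, w2 = h - h % factor, w - w % factor
    return (img[:h2, :w2]
            .reshape(h2 // factor, factor, w2 // factor, factor)
            .mean(axis=(1, 3)))

def spectral_aliasing_score(img, factor=2):
    # Mean frequency-domain distance between the naively subsampled
    # image and its (approximately) alias-free counterpart.
    a = np.abs(np.fft.fft2(downsample_naive(img, factor)))
    b = np.abs(np.fft.fft2(downsample_antialiased(img, factor)))
    return np.mean(np.abs(a - b))

# A rapidly varying pattern should score higher than a smooth ramp,
# since it has more energy above the post-downsampling Nyquist limit.
x = np.linspace(0, 8 * np.pi, 64)
checker = np.sin(np.outer(x, x))                                  # high-frequency content
smooth = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))   # low-frequency content
print(spectral_aliasing_score(checker) > spectral_aliasing_score(smooth))
```

A learned assessment, as proposed in this work, replaces the hand-crafted anti-aliasing reference with one suited to images whose degradation is unknown.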
