Abstract

Despite the remarkable progress of neural-network-based single-image super-resolution, results on real-world images remain unsatisfactory because the real-world degradation process is unknown and complex. The emergence of optical zoom datasets shows that neural networks can still achieve good results on real-world images as long as the low-resolution training images have features and distributions similar to those of real-world images. However, collecting such optical zoom datasets is laborious, and they apply only to specific cameras and shooting conditions. By studying optical zoom datasets, we propose a super-resolution degradation model consisting of blurring, frequency-domain processing, noise addition, and downsampling. Specifically, blurring uses a wave-shaped blur kernel inferred from the point spread function, which produces artifacts similar to those in real-world images. Frequency-domain processing simulates the frequency-domain aliasing seen in real-world images, such as jagged edges and background stripes. Experiments demonstrate that the proposed degradation model achieves visual effects comparable to those of optical zoom datasets. Existing high-resolution datasets can thus be converted into “optical zoom datasets” by the degradation model, in which the synthetic low-resolution images exhibit real-world image features, thereby extending super-resolution methods to real-world images.
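To make the four-stage pipeline concrete, the following is a minimal sketch of a blur → frequency-domain processing → noise → downsample degradation, assuming a grayscale float image in [0, 1]. The wave-shaped kernel (a cosine-modulated Gaussian) and the frequency-domain step (folding part of the high-frequency band back onto the spectrum) are illustrative placeholders, not the authors' exact formulation.

```python
import numpy as np
from scipy.signal import convolve2d

def wave_like_kernel(size=15, wavelength=4.0, sigma=3.0):
    """Placeholder for a wave-shaped blur kernel inspired by the PSF:
    a radially oscillating (cosine) profile under a Gaussian envelope."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r = np.sqrt(xx**2 + yy**2)
    k = np.cos(2 * np.pi * r / wavelength) * np.exp(-r**2 / (2 * sigma**2))
    k = np.clip(k, 0, None)          # keep the kernel non-negative
    return k / k.sum()

def frequency_domain_processing(img, fold=0.25):
    """Crude stand-in for the frequency-domain aliasing step: fold a
    fraction of the high-frequency band back onto the spectrum."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = F.shape
    mask = np.zeros_like(F)
    bh, bw = int(h * fold), int(w * fold)
    mask[:bh, :] = F[h - bh:, :]     # alias the bottom band onto the top
    mask[:, :bw] += F[:, w - bw:]    # alias the right band onto the left
    F_aliased = F + 0.1 * mask       # add a weak aliasing component
    return np.real(np.fft.ifft2(np.fft.ifftshift(F_aliased)))

def degrade(hr, scale=4, noise_sigma=0.01):
    """HR -> synthetic LR: blur, frequency-domain processing, noise, downsample."""
    blurred = convolve2d(hr, wave_like_kernel(), mode="same", boundary="symm")
    aliased = frequency_domain_processing(blurred)
    noisy = aliased + np.random.normal(0, noise_sigma, aliased.shape)
    return np.clip(noisy[::scale, ::scale], 0.0, 1.0)

# Example: degrade a random "HR" image to a quarter-resolution LR image.
hr = np.random.rand(256, 256)
lr = degrade(hr, scale=4)
print(lr.shape)  # (64, 64)
```

Applied to an existing high-resolution dataset, such a pipeline would yield synthetic low-resolution images whose blur artifacts and aliasing patterns resemble those of optical zoom datasets.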
