Abstract

Super-resolution is an essential task in remote sensing. It can enhance low-resolution remote sensing images and benefit downstream tasks such as building extraction and small object detection. However, existing remote sensing image super-resolution methods may fail in many real-world scenarios because they are trained on synthetic data generated by a single degradation model or on a limited amount of real data collected from specific satellites. To achieve super-resolution of real-world remote sensing images of different qualities in a unified framework, we propose a practical degradation model and a kernel-aware network (KANet). The proposed degradation model includes blur kernels estimated from real images and blur kernels generated from pre-defined distributions, which improves the diversity of training data and covers more real-world scenarios. The proposed KANet consists of a kernel prediction subnetwork and a kernel-aware super-resolution subnetwork. The former estimates the blur kernel of each image, making it possible to cope with real images of different qualities in an adaptive way. The latter iteratively solves two subproblems, degradation and high-frequency recovery, based on unfolding optimization. Furthermore, we propose a kernel-aware layer to adaptively integrate the predicted blur kernel into the super-resolution process. The proposed KANet achieves state-of-the-art performance for real-world image super-resolution and outperforms the competing methods by 0.2–0.8 dB in peak signal-to-noise ratio (PSNR). Extensive experiments on both synthetic and real-world images demonstrate that our approach is highly practical and can be readily applied to high-resolution remote sensing applications.
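As context for the degradation model the abstract describes, the sketch below illustrates the classical blur-then-downsample formulation, y = (x ⊗ k)↓s + n, that kernel-aware super-resolution methods build on. The kernel size, sigma, scale factor, and noise level here are illustrative assumptions, not the paper's actual settings.

```python
import numpy as np
from scipy.signal import convolve2d

def gaussian_kernel(size=9, sigma=1.6):
    """Isotropic Gaussian blur kernel (an assumed example; the paper also
    uses kernels estimated from real images and sampled from distributions)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def degrade(hr, kernel, scale=4, noise_sigma=0.01, seed=0):
    """Synthesize a low-resolution image: blur with `kernel`,
    downsample by `scale`, then add Gaussian noise."""
    rng = np.random.default_rng(seed)
    blurred = convolve2d(hr, kernel, mode="same", boundary="symm")
    lr = blurred[::scale, ::scale]
    return lr + rng.normal(0.0, noise_sigma, lr.shape)

hr = np.random.default_rng(1).random((64, 64))
lr = degrade(hr, gaussian_kernel())
print(lr.shape)  # (16, 16)
```

Training on pairs produced by varying the kernel and noise in such a pipeline is what lets a kernel-aware network generalize across image qualities.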
