Abstract

Sentinel-2 satellites provide free optical remote-sensing images with a spatial resolution of up to 10 m, but this level of spatial detail is insufficient for many applications, so it is worth improving the spatial resolution of Sentinel-2 images through super-resolution (SR). Currently, the most effective SR models are based on deep learning, especially generative adversarial networks (GANs). GAN-based models must be trained on low-resolution–high-resolution (LR–HR) image pairs. In this paper, a two-step super-resolution generative adversarial network (TS-SRGAN) model is proposed. In the first step, a GAN is trained to learn the degradation model: without supervised HR images, only the 10 m resolution images provided by the Sentinel-2 satellites are used to generate degraded images that lie in the same domain as real LR images, from which near-natural LR–HR image pairs are constructed. In the second step, a super-resolution GAN with strengthened perceptual features is designed to enhance the perceptual quality of the generated images. In experiments, the proposed method achieved an average NIQE as low as 2.54 and outperformed state-of-the-art models on two other no-reference image quality assessment (NR-IQA) metrics, BRISQUE and PIQE. Visual comparison of the generated images further demonstrates the effectiveness of TS-SRGAN.
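
As a rough illustration of the two-step idea, the PyTorch sketch below pairs a degradation generator (step one) with a super-resolution generator (step two), tied together by an adversarial loss on the LR domain. All network shapes, layer choices, class names, and loss terms here are illustrative assumptions for a minimal example, not the authors' actual TS-SRGAN architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DegradationGenerator(nn.Module):
    """Step 1: map 10 m Sentinel-2 tiles to synthetic LR images meant to
    match the real LR domain (a hypothetical 4x scale factor is assumed)."""

    def __init__(self, scale=4):
        super().__init__()
        self.scale = scale
        self.body = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, hr):
        lr = F.interpolate(hr, scale_factor=1.0 / self.scale,
                           mode="bicubic", align_corners=False)
        return lr + self.body(lr)  # learned residual on top of bicubic


class SRGenerator(nn.Module):
    """Step 2: super-resolve the synthetic LR images back to HR."""

    def __init__(self, scale=4):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),  # sub-pixel upsampling
        )
        self.scale = scale

    def forward(self, lr):
        up = F.interpolate(lr, scale_factor=self.scale,
                           mode="bicubic", align_corners=False)
        return up + self.body(lr)


class Discriminator(nn.Module):
    """PatchGAN-style critic; one instance per step in practice."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 1, 4, padding=1),
        )

    def forward(self, x):
        return self.net(x)


if __name__ == "__main__":
    hr = torch.rand(2, 3, 128, 128)     # stand-in for 10 m Sentinel-2 tiles
    real_lr = torch.rand(2, 3, 32, 32)  # stand-in for the real LR domain

    g_deg, g_sr, d_lr = DegradationGenerator(), SRGenerator(), Discriminator()

    # Step 1: the adversarial loss pushes g_deg(hr) toward the real LR domain.
    fake_lr = g_deg(hr)
    real_logits = d_lr(real_lr)
    fake_logits = d_lr(fake_lr.detach())
    d_loss = (F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
              + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits)))

    # Step 2: (fake_lr, hr) serves as the near-natural LR-HR pair; only a
    # pixel loss is shown, with perceptual and adversarial terms added the
    # same way in a full training loop.
    sr = g_sr(fake_lr.detach())
    sr_loss = F.l1_loss(sr, hr)
    print(f"d_loss={d_loss.item():.3f}  sr_loss={sr_loss.item():.3f}")
```

In a full pipeline the degradation generator would typically be trained to convergence first, then frozen to synthesize the LR–HR training pairs for the second-step SR generator; the details of the perceptual loss and discriminator design would follow the paper.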
