Abstract

Sentinel-2 satellites provide multi-spectral optical remote sensing images with four bands at 10 m spatial resolution. Thanks to the open data distribution policy, these images are becoming an important resource for many applications. However, for small-scale studies, their spatial detail might not be sufficient. On the other hand, WorldView commercial satellites offer multi-spectral images with very high spatial resolution, typically below 2 m, but their use can be impractical for large areas or multi-temporal analyses due to their high cost. To exploit the free availability of Sentinel imagery, it is worth considering deep learning techniques for single-image super-resolution, which enhance low-resolution (LR) images by recovering high-frequency details to produce high-resolution (HR) super-resolved images. In this work, we implement and train a model based on the Enhanced Super-Resolution Generative Adversarial Network (ESRGAN) with pairs of WorldView-Sentinel images to generate a super-resolved multispectral Sentinel-2 output with a scaling factor of 5. Our model, named RS-ESRGAN, removes the upsampling layers of the network to make it feasible to train with co-registered remote sensing images. The results outperform state-of-the-art models on standard metrics such as PSNR, SSIM, ERGAS, SAM and CC. Moreover, qualitative visual analysis shows spatial improvements as well as preservation of the spectral information, allowing the super-resolved Sentinel-2 imagery to be used in studies requiring very high spatial resolution.
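Two of the evaluation metrics named above, PSNR and SAM, can be sketched in a few lines of numpy. This is a minimal illustration, assuming images are float arrays (with a `(H, W, B)` band-last layout for the multispectral case), not the exact implementation used in the paper:

```python
import numpy as np

def psnr(ref, est, max_val=1.0):
    """Peak signal-to-noise ratio (dB) between a reference and an estimate."""
    mse = np.mean((ref - est) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

def sam(ref, est, eps=1e-12):
    """Mean spectral angle (radians) between per-pixel spectra of two
    (H, W, B) multispectral images; 0 means identical spectral shape."""
    dot = np.sum(ref * est, axis=-1)
    norms = np.linalg.norm(ref, axis=-1) * np.linalg.norm(est, axis=-1)
    cos = np.clip(dot / (norms + eps), -1.0, 1.0)
    return float(np.mean(np.arccos(cos)))
```

Higher PSNR indicates better radiometric fidelity, while a SAM close to zero indicates that the per-pixel spectral signatures (important for downstream remote sensing analysis) are preserved.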

Highlights

  • Satellite remote sensing is used in many fields of application, such as cartography, agriculture, environmental conservation, land use, urban planning, geology, natural hazards, hydrology, oceanography, atmospheric science and climate.

  • In this paper we propose a single-image super-resolution method based on a deep generative adversarial network to enhance the spatial resolution of the Sentinel-2 10 m bands to 2 m.

  • The proposed model, RS-ESRGAN, adapts the Enhanced Super-Resolution Generative Adversarial Network (ESRGAN) to work with co-registered remote sensing images; it was trained using the four channels (RGB and NIR) that overlap on both satellites, with Sentinel-2 images as input and WorldView images as target.
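Because the model removes ESRGAN's internal upsampling layers, the Sentinel-2 input must already sit on the target grid before entering the network, i.e. the 10 m bands are resampled by the scaling factor of 5 to the 2 m WorldView grid. A minimal numpy sketch of such a pre-upsampling step, using nearest-neighbour replication purely for illustration (the actual resampling and co-registration pipeline in the paper may differ):

```python
import numpy as np

def upsample_nearest(img, factor=5):
    """Replicate each pixel of an (H, W, B) image factor x factor times,
    e.g. bringing a 10 m grid to a 2 m grid when factor == 5."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

# A 100x100 Sentinel-2 tile (4 bands: RGB + NIR) becomes 500x500,
# spatially matching the co-registered WorldView target.
s2_tile = np.zeros((100, 100, 4), dtype=np.float32)
hr_grid = upsample_nearest(s2_tile)
```

With input and target on the same pixel grid, the network only needs to learn the high-frequency detail, which is what makes training on co-registered image pairs feasible.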

Introduction

Satellite remote sensing is used in many fields of application, such as cartography, agriculture, environmental conservation, land use, urban planning, geology, natural hazards, hydrology, oceanography, atmospheric science and climate. The Copernicus Sentinel-2 mission [2] comprises a constellation of two polar-orbiting satellites providing a high revisit frequency, and its MultiSpectral Instrument (MSI) records data in 13 spectral bands ranging from the visible to the shortwave infrared. This sensor acquires imagery at a spatial resolution of 10 m for the red, green, blue and near-infrared channels. However, this level of spatial detail may not be sufficient for certain applications. In these challenging scenarios, the only solution is to purchase very high spatial resolution (VHSR) imagery acquired by commercial satellites or to consider lower-altitude platforms, such as airplanes or drones. In recent decades, improving the spatial resolution of remote sensing images has been a very active research area, with the aim of saving considerable amounts of money in studies that require periodic imagery or that cover large areas at such fine spatial detail.
