Abstract

The unprecedented availability of remote sensing data from different complementary Sentinel missions provides increasing opportunities to alleviate the spatial limitations of Sentinel-3 (S3) from an intersensor perspective. Nonetheless, effectively exploiting such intersensor synergies still raises important challenges for super-resolution (SR) algorithms in terms of operational data availability, sensor alignment, and substantial resolution changes, among others. In this scenario, this article presents a new SR framework for spatially enhancing S3 ocean and land color instrument (OLCI) products by taking advantage of the higher spatial resolution of the Sentinel-2 (S2) multispectral instrument (MSI). To achieve this goal, we initially study some of the most important deep learning-based approaches. Then, we define a novel Level-4 SR framework that integrates a new convolutional neural network specially designed for super-resolving OLCI data. In contrast to other networks, the proposed SR architecture (termed SRS3) employs a dense multireceptive field together with a residual channel attention mechanism to mitigate the particularly low spatial resolution of OLCI while extracting features that remain discriminative across the large spatial resolution gap with respect to MSI. The experimental part of the work, conducted using ten coupled OLCI and MSI operational products, reveals the suitability of the presented Level-4 SR framework within the Copernicus programme context, as well as the advantages of the proposed architecture with respect to different state-of-the-art models when spatially enhancing OLCI products. The related codes will be publicly available at https://github.com/rufernan/SRS3.
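As a rough illustration of the residual channel attention mechanism mentioned in the abstract, the PyTorch sketch below implements a squeeze-and-excitation style attention block inside a residual unit. The class name RCABlock, the layer sizes, and the reduction factor are illustrative assumptions and do not reproduce the authors' released implementation.

```python
# Minimal sketch of a residual channel attention block (assumed layout, not
# the authors' code): two 3x3 convolutions, per-channel attention weights
# from global average pooling, and a residual skip connection.
import torch
import torch.nn as nn

class RCABlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # Squeeze-and-excitation style channel attention.
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = self.body(x)
        weights = self.attention(features)  # per-channel weights in (0, 1)
        return x + features * weights       # residual connection
```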

Highlights

  • We propose using the intersensor registration technique described in [44] to geometrically refine the ocean and land color instrument (OLCI) data using the multispectral instrument (MSI) sensor as the spatial reference

  • An intersensor framework is presented to super-resolve operational OLCI data by means of the higher spatial resolution of the S2 instrument

Summary

INTRODUCTION

Over the last decades, the technological evolution of airborne and spaceborne image acquisition instruments has made it possible to considerably improve the spatial resolution of multispectral (MS) sensors in order to face new challenges and societal needs by means of remote sensing (RS) images [1]. We aim at designing an end-to-end CNN-based SR architecture (termed SRS3) optimized for upscaling S3 data products, with particular focus on the following aspects: 1) spatial improvements with respect to different standard CNN-based SR networks; 2) preservation of discriminating features considering the low spatial resolution of OLCI; and 3) general applicability to operational data by covering multiple scaling scenarios according to the spatial reference given by MSI. To achieve this goal, the newly proposed architecture effectively integrates a dense multireceptive field together with a residual channel attention mechanism to mitigate OLCI’s spatial limitations and to bridge the large spatial resolution gap with respect to MSI.
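The dense multireceptive field named above can be pictured as parallel convolutions with different kernel sizes whose outputs are densely concatenated with the block input, followed by a pixel-shuffle upscaling toward the MSI grid. The PyTorch example below is a minimal, assumption-based sketch of such a block and of an SRS3-like forward pass; the class names (MultiReceptiveFieldBlock, TinySRS3), the layer widths, and the x4 scale factor are hypothetical and do not reproduce the released code.

```python
# Assumed sketch of a dense multireceptive-field block and a toy SR forward
# pass for 21-band OLCI patches; widths, depth and scale are illustrative.
import torch
import torch.nn as nn

class MultiReceptiveFieldBlock(nn.Module):
    def __init__(self, in_channels: int, growth: int = 32):
        super().__init__()
        # Parallel branches with 3x3, 5x5 and 7x7 receptive fields.
        self.branches = nn.ModuleList([
            nn.Conv2d(in_channels, growth, k, padding=k // 2)
            for k in (3, 5, 7)
        ])
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Dense connection: the input is concatenated with every branch output.
        outs = [self.act(branch(x)) for branch in self.branches]
        return torch.cat([x] + outs, dim=1)

class TinySRS3(nn.Module):
    def __init__(self, bands: int = 21, scale: int = 4, width: int = 64):
        super().__init__()
        self.head = nn.Conv2d(bands, width, 3, padding=1)
        self.mrf = MultiReceptiveFieldBlock(width)
        self.fuse = nn.Conv2d(width + 3 * 32, width, 1)
        self.upsample = nn.Sequential(
            nn.Conv2d(width, bands * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, lr_olci: torch.Tensor) -> torch.Tensor:
        x = self.head(lr_olci)
        x = self.fuse(self.mrf(x))
        return self.upsample(x)

# Example: a 21-band low-resolution patch of 75x75 pixels upscaled by x4.
hr = TinySRS3()(torch.randn(1, 21, 75, 75))  # -> shape (1, 21, 300, 300)
```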

SR in Remote Sensing
CNN-Based SR
METHODOLOGY
Sentinel Data Processing
Level-4 Image Fusion
CNN-Based SR Training
SR Product Generation
Datasets
Experimental Settings
Results
CONCLUSION AND FUTURE WORK