Abstract
In this paper, we investigate the use of machine-learning techniques to produce harmonised surface reflectances between Sentinel-2 and Pleiades images, reducing the impact of differences in sensors, viewing conditions, and atmospheric correction between them. We demonstrate that, while a simple linear regression with Sentinel-2 surface reflectances as the target domain can solve this problem when both images are calibrated to Top of Canopy reflectances, the non-linearity brought by a simple Multi-Layer Perceptron is already useful when Pleiades is only corrected to Top of Atmosphere level and the contribution of the atmosphere needs to be learned. We also demonstrate that training a Convolutional Neural Network instead of a plain MLP can capture undesired spatial effects, such as mis-registration or differences in spatial frequency content, that degrade the image quality of the corrected Pleiades product. We overcome this issue by proposing an ad hoc input convolutional layer that captures those effects and can later be unplugged during inference. Lastly, we propose an architecture and loss function that is robust to unmasked clouds and produces a confidence prediction during inference.
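The per-band linear regression baseline mentioned above can be sketched as follows. This is a minimal illustration, not the authors' code: the synthetic gain/offset relation and sample sizes are assumptions, with Sentinel-2 reflectance as the target domain and Pleiades reflectance as the source.

```python
# Minimal sketch of a per-band linear harmonisation: fit a gain and
# offset mapping Pleiades reflectance onto Sentinel-2 reflectance.
# The data here are synthetic (assumed relation y = 0.9 * x + 0.02).
import numpy as np

rng = np.random.default_rng(0)
pleiades = rng.uniform(0.0, 0.5, size=(1000, 1))   # source reflectances
sentinel2 = 0.9 * pleiades + 0.02                  # toy target relation

# Ordinary least squares: solve y ≈ a * x + b for (a, b)
X = np.hstack([pleiades, np.ones_like(pleiades)])
coef, _, _, _ = np.linalg.lstsq(X, sentinel2, rcond=None)
gain, offset = coef[0, 0], coef[1, 0]

harmonised = gain * pleiades + offset
print(round(gain, 3), round(offset, 3))   # 0.9 0.02
```

In practice one such regression would be fitted per spectral band over co-registered, cloud-free pixel pairs; the sketch only shows the estimation mechanics.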
Highlights
Problem statement: The joint use of surface reflectances from different sensors can be challenging because of differences in sensor characteristics, ground segment algorithms and their parameters, and exogenous data used for corrections.
Additional differences exist in ground segment algorithms, especially when using L2A Sentinel-2 products, for which endogenous atmospheric correction parameters are estimated from dedicated spectral bands that are unavailable on sensors like Pleiades.
Its first competitor is the Multi-Layer Perceptron (MLP) network, presented in figure 2(a), which consists of batch normalisation followed by two fully connected hidden layers of 320 units with a leaky-ReLU activation, and an output layer with a hyperbolic tangent activation, followed by a skip connection.
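The MLP described above can be sketched in NumPy as below. This is an illustrative forward pass only (randomly initialised weights, no training loop), and the input width of 4 bands is an assumption not stated in this excerpt; the hidden width (320), activations, and skip connection follow the description.

```python
# Illustrative sketch (not the authors' code) of the MLP baseline:
# batch normalisation, two 320-unit leaky-ReLU hidden layers, a tanh
# output layer, and a skip connection adding a residual correction.
import numpy as np

rng = np.random.default_rng(0)
n_bands, hidden = 4, 320   # n_bands = 4 is an assumption

def leaky_relu(x, slope=0.01):
    return np.where(x > 0, x, slope * x)

def batch_norm(x, eps=1e-5):
    # Normalise each band over the batch (inference would use running stats).
    return (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)

# Randomly initialised weights, for shape illustration only.
W1 = rng.normal(0, 0.05, (n_bands, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(0, 0.05, (hidden, hidden));  b2 = np.zeros(hidden)
W3 = rng.normal(0, 0.05, (hidden, n_bands)); b3 = np.zeros(n_bands)

def mlp(x):
    h = batch_norm(x)
    h = leaky_relu(h @ W1 + b1)
    h = leaky_relu(h @ W2 + b2)
    correction = np.tanh(h @ W3 + b3)   # bounded correction term
    return x + correction               # skip connection: residual output

out = mlp(rng.uniform(0.0, 0.5, (8, n_bands)))
print(out.shape)   # (8, 4)
```

The skip connection means the network only has to learn a correction to the input reflectances rather than reproduce them from scratch, which is a common choice when source and target domains are already close.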
Summary
The joint use of surface reflectances from different sensors can be challenging because of differences in sensor characteristics, ground segment algorithms and their parameters, and exogenous data used for corrections. All those factors can affect the coherency between surface reflectances and create unwanted artefacts in operations leveraging direct comparison or statistical learning. To tackle this issue, the standard physics-based method includes careful modelling of effects induced by sensor differences as well as the use of common algorithms and parameters for both sensors, as demonstrated in (Claverie et al., 2018) for the constitution of a harmonised Sentinel-2 and Landsat 8 dataset. Additional differences exist in ground segment algorithms, especially when using L2A Sentinel-2 products, for which endogenous atmospheric correction parameters are estimated from dedicated spectral bands, unavailable on sensors like Pleiades.
More From: The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences