Abstract

Multispectral satellite imagery is the primary data source for monitoring land cover change and characterizing land cover at the global scale. However, the accuracy of land cover classification is often constrained by the spatial and temporal resolutions of the acquired satellite images. This paper proposes a novel spatiotemporal fusion method based on deep convolutional neural networks, motivated by the availability of massive remote sensing data and the large spatial resolution gap between MODIS and Sentinel images. Training was performed on the public SEN12MS dataset, while validation and testing were conducted using ground truth data from the 2020 IEEE GRSS Data Fusion Contest. As a result of the fusion, the synthesized land cover map was more accurate than the corresponding MODIS-derived land cover map, with an enhanced spatial resolution of 10 m. The ensemble approach can be applied to improve data quality when generating a global land cover product from coarse satellite imagery.

Highlights

  • Remotely sensed satellite imagery is the primary data source for monitoring land cover change (LCC) and characterizing land cover at the global scale (Song et al., 2017)

  • Global-scale land cover mapping at coarse resolution has been driven by the availability of the Moderate Resolution Imaging Spectroradiometer (MODIS) dataset; previous studies have conducted spatiotemporal fusion to blend MODIS and Landsat data in order to obtain improved classification results at a higher spatial resolution of 30 m (Gevaert and García-Haro, 2015; Wang et al., 2015; Chen et al., 2017)

  • With the aim of providing enhanced land cover mapping through the fusion of multisource satellite data, this study proposes an end-to-end deep learning method to enhance the spatial resolution of MODIS-derived land cover maps by integrating the maps, synthetic aperture radar (SAR) images derived from Sentinel-1, and multispectral images derived from Sentinel-2 (see the input-assembly sketch after this list)
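A minimal sketch of how such a multisource input could be assembled, assuming PyTorch. The helper name assemble_fusion_input, the channel counts (2 SAR bands, 13 multispectral bands, a single coarse land-cover layer), and the nearest-neighbour upsampling of the 500 m MODIS map to the 10 m Sentinel grid are illustrative assumptions, not the authors' exact configuration.

import torch
import torch.nn.functional as F

def assemble_fusion_input(s1_sar, s2_msi, modis_lc, num_classes=17):
    """Stack Sentinel-1, Sentinel-2, and an upsampled MODIS land cover map
    into one multi-channel tensor for a segmentation network.

    s1_sar   : (B, 2, H, W)  SAR backscatter (e.g. VV/VH) at 10 m
    s2_msi   : (B, 13, H, W) multispectral reflectance at 10 m
    modis_lc : (B, 1, h, w)  integer class map at 500 m (h << H)
    """
    # Upsample the coarse class map to the Sentinel grid; nearest-neighbour
    # keeps the labels categorical (no interpolated, meaningless class values).
    lc_up = F.interpolate(modis_lc.float(), size=s1_sar.shape[-2:], mode="nearest")
    # One-hot encode the classes so the network sees each as its own channel.
    lc_onehot = F.one_hot(lc_up.long().squeeze(1), num_classes)
    lc_onehot = lc_onehot.permute(0, 3, 1, 2).float()
    # Channel-wise concatenation: 2 + 13 + num_classes input channels.
    return torch.cat([s1_sar, s2_msi, lc_onehot], dim=1)

The stacked tensor can then be fed to any encoder-decoder segmentation model, such as a DeepLabV3+ whose first convolution is widened to the matching number of input channels.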


Summary

INTRODUCTION

Remotely sensed satellite imagery is the primary data source for monitoring land cover change (LCC) and characterizing land cover at the global scale (Song et al., 2017). Sentinel-1 and Sentinel-2 are two recently launched satellite constellations that provide higher temporal resolution (3–5 days) and higher spatial resolution (5–10 m) than the Landsat satellites. These advantages are fundamental to a spatiotemporal fusion process for improving land cover classification. To the best of our knowledge, no deep learning-based model has yet been introduced to conduct spatiotemporal fusion blending MODIS data and Sentinel satellite images. With the aim of providing enhanced land cover mapping through the fusion of multisource satellite data, this study proposes an end-to-end deep learning method to enhance the spatial resolution of MODIS-derived land cover maps (with an original spatial resolution of 500 m) by integrating the maps, synthetic aperture radar (SAR) images derived from Sentinel-1, and multispectral images derived from Sentinel-2. To deal with weakly annotated ground truth labels, an additional module was embedded in the model that automatically updates the coarse labels based on intermediate predictions on the training set, as sketched below.
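One way such a label-refinement step could work, again assuming PyTorch. The function name refine_labels, the confidence threshold, and the rule "replace a coarse label only where the model confidently disagrees" are assumptions about the idea described above, not the paper's exact update rule.

import torch

@torch.no_grad()
def refine_labels(logits, coarse_labels, threshold=0.9):
    """Update weakly annotated labels from intermediate predictions.

    logits        : (B, C, H, W) raw network outputs on the training set
    coarse_labels : (B, H, W)    current (MODIS-derived) integer labels
    """
    probs = torch.softmax(logits, dim=1)
    conf, pred = probs.max(dim=1)             # per-pixel confidence and class
    refined = coarse_labels.clone()
    # Only overwrite pixels where the model confidently disagrees with the
    # coarse annotation; uncertain pixels keep their original label.
    mask = (conf > threshold) & (pred != coarse_labels)
    refined[mask] = pred[mask]
    return refined

Iterating this update during training lets the network gradually clean the 500 m annotations toward the 10 m detail visible in the Sentinel inputs.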

DeepLabV3 Plus
Pre-processing of Sentinel-1 SAR images
Data Augmentation
Label Refinement
Implementation details
Experiment results
Visualized comparison
Foreseeable limitations
Findings
CONCLUSION
