Abstract

Spatiotemporal data fusion is a key technique for generating unified time-series images from various satellite platforms to support vegetation mapping and monitoring. However, the high spectral similarity of different vegetation types poses an enormous challenge for the similar-pixel selection procedure used in spatiotemporal data fusion, which may introduce considerable uncertainty into the fused results. Here, we propose an object-based spatiotemporal data-fusion framework that replaces the original similar-pixel selection procedure with an object-restricted method to address this issue. The proposed framework can be applied to any spatiotemporal data-fusion algorithm based on similar pixels. In this study, we modified the spatial and temporal adaptive reflectance fusion model (STARFM), the enhanced spatial and temporal adaptive reflectance fusion model (ESTARFM), and the flexible spatiotemporal data-fusion model (FSDAF) using the proposed framework, and evaluated their performance in fusing Sentinel-2 and Landsat 8 images, Landsat 8 and Moderate Resolution Imaging Spectroradiometer (MODIS) images, and Sentinel-2 and MODIS images over a study site covered by grasslands, croplands, coniferous forests, and broadleaf forests. The results show that the proposed object-based framework improves all three data-fusion algorithms significantly by delineating vegetation boundaries more clearly; the improvement is greatest for FSDAF, with an average decrease of 2.8% in relative root-mean-square error (rRMSE) across all sensor combinations. Moreover, the improvement is most pronounced when fusing Sentinel-2 and Landsat 8 images (an average decrease of 2.5% in rRMSE). Using the fused images generated by the proposed object-based framework, vegetation mapping results can be improved by substantially reducing the “salt-and-pepper” effect. We believe that the proposed object-based framework has great potential for generating high-resolution time-series remote-sensing data for vegetation mapping applications.
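To make the object-restricted idea concrete, the minimal sketch below shows one possible way to confine the similar-pixel search to the image object (segment) that contains the target pixel. This is an illustration under stated assumptions, not the authors' implementation: the function name, the use of a precomputed segmentation map, and the simple rank-by-spectral-distance rule are all hypothetical.

    import numpy as np

    def select_similar_pixels_object_restricted(fine_band, object_labels,
                                                row, col, window=31, n_similar=20):
        # fine_band:     2-D fine-resolution reflectance at the base date
        # object_labels: 2-D integer segment IDs from an image segmentation of
        #                the fine image (same shape as fine_band)
        # window:        side length of the moving search window (odd number)
        # n_similar:     maximum number of similar pixels to return
        half = window // 2
        r0, r1 = max(0, row - half), min(fine_band.shape[0], row + half + 1)
        c0, c1 = max(0, col - half), min(fine_band.shape[1], col + half + 1)
        win_refl = fine_band[r0:r1, c0:c1]
        win_labels = object_labels[r0:r1, c0:c1]

        # Object restriction: only pixels in the same segment as the target are
        # candidates, so spectrally similar pixels belonging to a different
        # vegetation object cannot be selected.
        diff = np.abs(win_refl - fine_band[row, col])
        diff[win_labels != object_labels[row, col]] = np.inf

        # Keep the n_similar candidates closest in spectral distance.
        order = np.argsort(diff, axis=None)[:n_similar]
        rows, cols = np.unravel_index(order, diff.shape)
        keep = np.isfinite(diff[rows, cols])
        return rows[keep] + r0, cols[keep] + c0

A conventional selection would apply the same spectral-distance ranking but without the segment mask, which is how pixels of a spectrally similar yet different vegetation type can be selected and blur vegetation boundaries.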

Highlights

  • Mapping the distribution and quantity of vegetation is critical for managing natural resources, preserving biodiversity, estimating vegetation carbon storage, and understanding the Earth’s energy balance [1]

  • The spatial and temporal adaptive reflectance fusion model (STARFM) and the flexible spatiotemporal data-fusion model (FSDAF) show similar fusion results (Figure 5b,d), while the ESTARFM method generates an image with significant color ramp differences (Figure 5c)

  • The framework can be applied to any spatiotemporal data-fusion algorithm by replacing its original similar-pixel selection method with an object-restricted method

Introduction

Mapping the distribution and quantity of vegetation is critical for managing natural resources, preserving biodiversity, estimating vegetation carbon storage, and understanding the Earth’s energy balance [1]. Because vegetation phenology information provided by multi-temporal images with a finer spatial resolution is beneficial for improving vegetation mapping accuracy [5,6], the derivation and processing of multi-temporal remote-sensing data with a high spatial resolution have been an active research area in the field of vegetation mapping. Spatiotemporal data fusion, a methodology for fusing satellite images from two different sensors, has been developed to generate data with both high spatial and temporal resolutions [9]. In spatiotemporal data fusion, imagery with a high spatial resolution but low temporal resolution is called “fine imagery”, while imagery with a low spatial resolution but high temporal resolution is called “coarse imagery” [10]; we follow this convention in this study.
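As a worked illustration of how fine and coarse imagery are combined (a generic STARFM-style prediction, not the exact formulation of any of the algorithms discussed here), the fine-resolution reflectance at the prediction date can be estimated from the base-date fine image plus the change observed in the coarse imagery, weighted over the selected similar pixels. The function below is a hypothetical sketch and assumes the coarse images have already been resampled to the fine grid.

    import numpy as np

    def starfm_style_prediction(fine_t0, coarse_t0, coarse_tp,
                                sim_rows, sim_cols, weights):
        # fine_t0:   fine image at the base date t0
        # coarse_t0: coarse image at t0, resampled to the fine grid
        # coarse_tp: coarse image at the prediction date tp, resampled likewise
        # sim_rows, sim_cols: indices of the selected similar pixels
        # weights:   normalized weights for those pixels (summing to 1), which in
        #            STARFM-like methods combine spectral, temporal, and spatial
        #            distances
        change = coarse_tp[sim_rows, sim_cols] - coarse_t0[sim_rows, sim_cols]
        return float(np.sum(weights * (fine_t0[sim_rows, sim_cols] + change)))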
