Abstract

Spatiotemporal fusion (STF) is considered a feasible and cost-effective way to deal with the trade-off between the spatial and temporal resolution of satellite sensors, and to generate satellite images with both high spatial and high temporal resolution. This is achieved by fusing two types of satellite images: images with fine temporal but coarse spatial resolution, and images with fine spatial but coarse temporal resolution. Numerous STF methods have been proposed; however, accurately predicting both abrupt landcover change and phenological change remains a challenge. Meanwhile, robustness to radiation differences between multi-source satellite images is crucial for the effective application of STF methods. Aiming to solve these problems, in this paper we propose a hybrid deep learning-based STF method (HDLSFM). The method formulates a hybrid framework for robust fusion of phenological and landcover change information with minimal input requirements, combining a nonlinear deep learning-based relative radiometric normalization, a deep learning-based superresolution, and a linear-based fusion to address radiation differences between different types of satellite images and to predict landcover and phenological change. Four comparative experiments using three popular STF methods as benchmarks, i.e., the spatial and temporal adaptive reflectance fusion model (STARFM), flexible spatiotemporal data fusion (FSDAF), and Fit-FC, demonstrated the effectiveness of HDLSFM in predicting phenological and landcover change. Meanwhile, HDLSFM is robust to radiation differences between different types of satellite images and to the time interval between the prediction and base dates, which ensures its effectiveness in the generation of fused time-series data.
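The linear assumption underlying classical STF methods such as STARFM can be illustrated with a minimal sketch: the temporal change observed between two coarse-resolution images is assumed to apply uniformly within each coarse pixel and is added to the fine-resolution base image. The function name, array shapes, and upsampling-by-replication step below are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def linear_stf_predict(fine_t1, coarse_t1, coarse_t2, scale):
    """Predict a fine-resolution image at date t2 from a fine image at
    base date t1 and a coarse image pair at t1 and t2.

    Illustrative sketch of the linear STF assumption: the coarse-scale
    temporal change is applied uniformly inside each coarse pixel.
    """
    # Temporal change observed at coarse resolution
    delta = coarse_t2 - coarse_t1
    # Upsample the change to fine resolution by pixel replication
    delta_fine = np.kron(delta, np.ones((scale, scale)))
    # Add the change to the fine-resolution base image
    return fine_t1 + delta_fine

# Toy example: a uniform reflectance increase of 0.1 at coarse scale
fine_t1 = np.full((4, 4), 0.2)
coarse_t1 = np.full((2, 2), 0.2)
coarse_t2 = np.full((2, 2), 0.3)
pred = linear_stf_predict(fine_t1, coarse_t1, coarse_t2, scale=2)
```

Because the change is distributed uniformly within each coarse pixel, this formulation cannot localize abrupt landcover change at sub-coarse-pixel scale, which is the insufficiency the highlights below refer to.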

Highlights

  • Remote sensing data with high temporal and spatial resolutions have been used in various applications, such as vegetation phenology monitoring [1,2,3], landcover change (LC) detection [4,5], landcover type classification [6,7,8], and carbon sequestration modeling [9]. It is still a challenge for sensors of a single type to provide remote sensing data with both fine spatial and temporal resolutions due to technological and financial limitations [10]

  • It can be seen that the spatial and temporal adaptive reflectance fusion model (STARFM) and Fit-FC exhibited the worst visual artifacts, with blurrier spatial details than the other fusion methods, especially in the inundated area

  • The poorer quantitative results of the two spatiotemporal fusion (STF) methods, both at the complete Lower Gwydir Catchment (LGC) site (Figure 6) and in the sub-area (Table 3), confirm the unsuitability of these methods for the prediction of significant LC, which is mainly due to the insufficiency of the linear assumption in LC prediction


Introduction

Remote sensing data with high temporal and spatial resolutions have been used in various applications, such as vegetation phenology monitoring [1,2,3], landcover change (LC) detection [4,5], landcover type classification [6,7,8], and carbon sequestration modeling [9]. It is still a challenge for sensors of a single type to provide remote sensing data with both fine spatial and temporal resolutions due to technological and financial limitations [10]. Landsat images have a spatial resolution of 30 m, which is suitable for heterogeneous areas; however, the satellite's relatively long revisit period of 16 days makes it unsuitable for capturing rapid land surface temporal change. Various STF methods have been developed and widely utilized in

