Abstract

Spatiotemporal fusion methods are a useful tool for generating multi-temporal reflectance data from a limited number of high-spatial-resolution images and the necessary low-spatial-resolution images. In particular, the superiority of the sparse representation-based spatiotemporal reflectance fusion model (SPSTFM) in capturing phenology and land cover type changes has been preliminarily demonstrated. However, the dictionary training process, a key step in sparse learning-based fusion algorithms, and its effect on fusion quality remain unclear. In this paper, an enhanced spatiotemporal fusion scheme based on the single-pair SPSTFM algorithm is proposed by improving the dictionary learning process, and it is evaluated on two actual datasets: one representing a rural area with phenology changes and the other an urban area with land cover type changes. The proposed strategy for enhancing dictionary learning enlarges the training dataset in two modes, using spatially or temporally extended samples. Compared with the original learning-based algorithm and other typical single-pair fusion models, experimental results show that the proposed method with either extension mode improves reflectance prediction on both datasets. Furthermore, the strategy with temporally extended training samples is more effective than the strategy with spatially extended samples for the area with phenology changes, whereas the reverse holds for the area with land cover type changes.
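As a minimal illustration of the dictionary-learning step described above, the sketch below shows how a patch-based training set could be enlarged in the two modes, spatially (drawing more patches from the same image pair) or temporally (drawing patches from additional acquisition dates). It is not the authors' implementation: scikit-learn's DictionaryLearning stands in for the coupled dictionary training used in SPSTFM, and the function names, patch size, and atom count are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

def sample_patches(image, patch_size=7, max_patches=2000, seed=0):
    """Draw flattened, zero-mean training patches from one reflectance band."""
    patches = extract_patches_2d(image, (patch_size, patch_size),
                                 max_patches=max_patches, random_state=seed)
    samples = patches.reshape(patches.shape[0], -1).astype(np.float64)
    return samples - samples.mean(axis=1, keepdims=True)

def train_dictionary(images, n_atoms=128, patch_size=7, max_patches=2000):
    """Train a single dictionary from a list of training images.

    Spatial extension: keep the same dates but raise max_patches (or sample
    patches from a larger spatial window around the study area).
    Temporal extension: append images acquired on additional dates to
    `images` before training.
    """
    samples = np.vstack([sample_patches(img, patch_size, max_patches)
                         for img in images])
    learner = DictionaryLearning(n_components=n_atoms,
                                 transform_algorithm="omp",
                                 max_iter=20, random_state=0)
    learner.fit(samples)
    return learner.components_  # shape: (n_atoms, patch_size * patch_size)
```

In SPSTFM proper, a pair of coupled dictionaries (one for the high-resolution difference image and one for the corresponding low-resolution difference image) is trained jointly rather than a single dictionary, but the same two ways of enlarging the sample set apply.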

Highlights

  • Given the growing application requirements of a variety of refined and high-frequency thematic studies, such as land use and cover change [1], ecological environment monitoring [2], forest and pasture management [3], oceanographic surveys [4], and disaster monitoring [5], possible solutions for the frequent acquisition of high-spatial-resolution remotely sensed data have been widely proposed.

  • The MODIS reflectance product is generated by combining the green channel of MOD09A1 with the red and NIR channels of MOD09Q1, both downloaded directly from the Land Processes Distributed Active Archive Center (LPDAAC); a band-stacking sketch follows this list.

  • In this paper, an enhanced fusion scheme based on the single-pair sparse learning fusion model is proposed by improving the dictionary training process, and its evaluation strategy is designed using spatially and temporally extended training samples.
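The band combination mentioned in the second highlight can be sketched as follows; the file names, band order, and nearest-neighbour resampling from the 500 m MOD09A1 grid to the 250 m MOD09Q1 grid are assumptions made for illustration, not the exact preprocessing used in the paper.

```python
import numpy as np
import rasterio

# Hypothetical GeoTIFF exports of the MODIS HDF subdatasets.
with rasterio.open("MOD09A1_green_500m.tif") as src:
    green = src.read(1)

with rasterio.open("MOD09Q1_red_nir_250m.tif") as src:
    red, nir = src.read(1), src.read(2)
    profile = src.profile

# MOD09A1 is 500 m while MOD09Q1 is 250 m, so the green band is upsampled
# (nearest neighbour, factor 2) and cropped to the 250 m grid before stacking.
green_250m = np.kron(green, np.ones((2, 2)))[: red.shape[0], : red.shape[1]]

profile.update(count=3, dtype="float32")
with rasterio.open("MODIS_green_red_nir_250m.tif", "w", **profile) as dst:
    dst.write(np.stack([green_250m, red, nir]).astype("float32"))
```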



Introduction

Given the growing application requirements of a variety of refined and high-frequency thematic studies, such as land use and cover change [1], ecological environment monitoring [2], forest and pasture management [3], oceanographic surveys [4], and disaster monitoring [5], possible solutions for the frequent acquisition of high-spatial-resolution remotely sensed data have been widely proposed. One significant line of work pursues a direct solution: the launch of a progressively increasing number of high-quality remote sensors, some of which offer high spatial resolution (e.g., WorldView-3/4 and Gaojing-1/2 at 0.31 and 0.5 m, respectively), high temporal resolution (e.g., the Moderate Resolution Imaging Spectroradiometer (MODIS) and other meteorological satellites), high spectral resolution (e.g., EO-1 Hyperion and Gaofen-3, both with 30 m spatial resolution), or even large constellation sizes (e.g., the Gaojing project, which plans to place 16 similar optical satellites in orbit no later than 2020). The image fusion strategy, especially in the spatial and temporal dimensions, provides another effective way to synthesize an optimized image by combining spatial and temporal information from multi-source remote sensors with complementary spatiotemporal characteristics (e.g., a high-spatial, low-temporal-resolution image and a low-spatial, high-temporal-resolution image).
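To make this single-pair fusion setting concrete, the following is a deliberately simplified sketch (not SPSTFM itself, which replaces the naive change term below with sparse coding over trained dictionaries): the fine-resolution image at the prediction date t2 is approximated from the known fine image at the reference date t1 plus the upsampled change observed in the coarse-resolution series. The scale factor and nearest-neighbour upsampling are illustrative assumptions.

```python
import numpy as np

def upsample(coarse, factor):
    """Nearest-neighbour upsampling of a coarse image to the fine grid."""
    return np.kron(coarse, np.ones((factor, factor)))

def fuse_single_pair(fine_t1, coarse_t1, coarse_t2, factor=16):
    """Predict the fine image at t2 from one fine/coarse pair at t1.

    fine_t1   : high-spatial-resolution reflectance at the reference date t1
    coarse_t1 : low-spatial-resolution reflectance at t1 (e.g., MODIS)
    coarse_t2 : low-spatial-resolution reflectance at the prediction date t2
    """
    change = upsample(coarse_t2 - coarse_t1, factor)
    return fine_t1 + change[: fine_t1.shape[0], : fine_t1.shape[1]]
```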

