Abstract

Spatial and temporal data fusion approaches have been developed to fuse reflectance imagery from Landsat and the Moderate Resolution Imaging Spectroradiometer (MODIS), which have complementary spatial and temporal sampling characteristics. The approach relies on using Landsat and MODIS image pairs acquired on the same day to estimate Landsat-scale reflectance on other MODIS dates. Previous studies have revealed that the accuracy of data fusion results partially depends on the input image pair used, but the selection of the optimal image pair to achieve better prediction of surface reflectance has not been fully evaluated. This paper assesses the impacts of Landsat-MODIS image pair selection on the accuracy of the predicted land surface reflectance using the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) over different landscapes. MODIS images from the Aqua and Terra platforms were paired with images from the Landsat 7 Enhanced Thematic Mapper Plus (ETM+) and Landsat 8 Operational Land Imager (OLI) to form different image pair combinations. The accuracy of the predicted surface reflectance at 30 m resolution was evaluated against the observed Landsat data in terms of mean absolute difference, root mean square error, and correlation coefficient. Results show that MODIS pair images with smaller view zenith angles produce better predictions. As expected, image pairs closer to the prediction date produce better prediction results over short prediction periods. For prediction dates distant from the pair date, the predictability depends on the temporal and spatial variability of land cover type and phenology. The prediction accuracy for forests is higher than for crops in our study areas. The Normalized Difference Vegetation Index (NDVI) for crops is overestimated during the non-growing season when using an input image pair from the growing season, while NDVI is slightly underestimated during the growing season when using an image pair from the non-growing season. Two automatic pair selection strategies are evaluated. Results show that selecting the MODIS pair-date image that correlates most highly with the MODIS image on the prediction date produces more accurate predictions than the nearest-date strategy. This study demonstrates that data fusion results can be improved if appropriate image pairs are used.
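To illustrate the two automatic pair selection strategies compared above, the sketch below contrasts nearest-date selection with correlation-based selection. It is a minimal sketch, not the implementation used in the paper: the function and variable names (select_pair_date, modis_by_date, etc.) are hypothetical, and the MODIS arrays are assumed to be co-registered, with clouds and fill values already set to NaN.

```python
import numpy as np

def select_pair_date(candidate_dates, modis_by_date, prediction_date):
    """Choose a Landsat-MODIS pair date for fusion (illustrative sketch).

    candidate_dates : dates (datetime.date) that have a clear Landsat-MODIS pair
    modis_by_date   : dict of date -> MODIS reflectance array, all arrays
                      co-registered and of identical shape
    prediction_date : date for which fine-resolution reflectance is predicted

    Returns the dates chosen by the two strategies compared in the paper:
    nearest date, and highest correlation with the prediction-date MODIS image.
    """
    target = modis_by_date[prediction_date].ravel()

    # Strategy 1: nearest date -- pick the pair acquired closest in time.
    nearest = min(candidate_dates,
                  key=lambda d: abs((d - prediction_date).days))

    # Strategy 2: highest correlation -- pick the pair whose MODIS image
    # correlates most strongly with the MODIS image on the prediction date.
    def correlation(d):
        x = modis_by_date[d].ravel()
        valid = np.isfinite(x) & np.isfinite(target)  # skip cloud/fill pixels
        return np.corrcoef(x[valid], target[valid])[0, 1]

    most_correlated = max(candidate_dates, key=correlation)
    return nearest, most_correlated
```

The correlation-based choice tends to favour a pair image acquired under surface conditions similar to those on the prediction date, which is consistent with the finding reported in the abstract.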

Highlights

  • A large number of remote sensing sensors with different spatial, temporal, and spectral characteristics have been launched, dramatically improving the ability to acquire images of the Earth’s surface; however, these sensors typically represent a trade-off between spatial and temporal resolution due to technological and financial constraints [1,2,3]

  • Using six combinations of Moderate Resolution Imaging Spectroradiometer (MODIS) (Terra, Aqua, and combined) and Landsat (Landsat 7 and 8) images as data fusion sources, we find that daily MODIS observations with a smaller view zenith angle can produce better data fusion results

  • In order to give guidance for choosing the optimal image pairs, this paper evaluates the accuracy of fusing data from different combinations of MODIS-Landsat data sources, using the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) as an example


Summary

Introduction

A large number of remote sensing sensors with different spatial, temporal, and spectral characteristics have been launched, resulting in a dramatic improvement in the ability to acquire images of the Earth’s surface; however, these sensors typically represent a trade-off between spatial and temporal resolution due to technological and financial constraints [1,2,3]. A feasible solution is to use a data fusion method that blends images from different sensors to generate data with both high temporal and high spatial resolution, thereby enhancing the capability of remote sensing for monitoring land surface dynamics. Among weighting function-based methods, the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) [6] was the initial attempt to blend fine- and coarse-resolution satellite data to generate synthetic surface reflectance products with high spatial and high temporal resolution. The optimal image pair is defined in this paper as the one that produces the smallest mean absolute difference (MAD) between the data fusion results and Landsat observations that were not used in the data fusion process. These findings have potential applicability to other weighting function-based data fusion approaches.
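For context, the following is a minimal sketch of the STARFM-style weighting idea described above and of the MAD criterion used to define the optimal pair. It is illustrative only and is not the published STARFM implementation: the array names, window size, and weight formulation are simplifying assumptions, and the full algorithm additionally screens spectrally similar neighbour pixels and applies quality and uncertainty thresholds.

```python
import numpy as np

def starfm_like_predict(fine_t0, coarse_t0, coarse_tp, window=31):
    """Simplified STARFM-style prediction (illustrative sketch).

    fine_t0   : Landsat reflectance on the pair date t0 (2-D array, 30 m grid)
    coarse_t0 : MODIS reflectance on t0, resampled to the same 30 m grid
    coarse_tp : MODIS reflectance on the prediction date tp, same grid

    Each pixel is predicted from neighbours inside a moving window, weighted
    by the spectral difference |fine - coarse| at t0, the temporal difference
    |coarse_tp - coarse_t0|, and the spatial distance to the central pixel.
    """
    half = window // 2
    rows, cols = fine_t0.shape
    pred = np.empty_like(fine_t0, dtype=float)

    # Spatial distance term, identical for every window position.
    yy, xx = np.mgrid[-half:half + 1, -half:half + 1]
    dist = 1.0 + np.hypot(xx, yy) / half

    pad = lambda a: np.pad(a, half, mode="reflect")
    f0, c0, cp = pad(fine_t0), pad(coarse_t0), pad(coarse_tp)

    for i in range(rows):
        for j in range(cols):
            sl = (slice(i, i + window), slice(j, j + window))
            spec = np.abs(f0[sl] - c0[sl]) + 1e-6   # spectral difference
            temp = np.abs(cp[sl] - c0[sl]) + 1e-6   # temporal difference
            w = 1.0 / (spec * temp * dist)          # combined weight
            w /= w.sum()
            # Weighted sum of (Landsat at t0 + MODIS change from t0 to tp).
            pred[i, j] = np.sum(w * (f0[sl] + cp[sl] - c0[sl]))
    return pred

def mean_absolute_difference(predicted, observed):
    """MAD between fused reflectance and a held-out Landsat observation."""
    return float(np.mean(np.abs(predicted - observed)))
```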

