Abstract

Interest in data fusion for remote-sensing applications continues to grow, driven by the increasing importance of obtaining data at high resolution both spatially and temporally. Applications that will benefit from data fusion include ecosystem disturbance and recovery assessment, ecological forecasting, and others. This paper introduces a novel spatiotemporal fusion approach, the wavelet-based Spatiotemporal Adaptive Data Fusion Model (WSAD-FM). The new technique is motivated by the popular STARFM tool, which uses lower-resolution MODIS imagery to supplement Landsat scenes through a linear model. The novelty of WSAD-FM is twofold. First, unlike STARFM, the technique does not predict an entire new image in one linear step; instead, it decomposes the input images into separate "approximation" and "detail" parts, which are fed into a prediction model that limits the effects of linear interpolation among images. Low-spatial-frequency components are predicted by a weighted mixture of MODIS images and the low-spatial-frequency components of temporally neighboring Landsat images, while high-spatial-frequency components are predicted by a weighted average of the high-spatial-frequency components of the Landsat images alone. Second, the method has demonstrated good performance using only one input Landsat image and a pair of MODIS images. The technique was tested using several Landsat and MODIS images, acquired in 2001, for a study area in central North Carolina (WRS-2 path/row 16/35 in Landsat; tile H11/V5 in MODIS). NDVI images calculated from the study area were used as input to the algorithm. The technique was evaluated by predicting existing Landsat images, yielding R² values of 0.70 to 0.92 for estimated Landsat images in the red band and 0.62 to 0.89 for estimated NDVI images.
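The decompose-then-predict idea described above can be sketched in a few lines. The snippet below is an illustrative sketch, not the authors' implementation: it uses a single-level 1-D Haar wavelet split on toy pixel rows, and a hypothetical prediction rule (shift the Landsat approximation by the observed MODIS change, reuse Landsat detail coefficients unchanged) standing in for the paper's weighted mixtures over 2-D imagery.

```python
# Illustrative sketch of the WSAD-FM idea, assuming a single-level 1-D Haar
# transform and hypothetical toy values; the actual model uses 2-D wavelet
# decomposition and weighted combinations of multiple input images.

def haar_decompose(signal):
    """Split a 1-D signal into approximation (low-freq) and detail (high-freq)."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def haar_reconstruct(approx, detail):
    """Invert the single-level Haar split: a+d and a-d recover each pixel pair."""
    out = []
    for a, d in zip(approx, detail):
        out.extend([a + d, a - d])
    return out

# Toy 1-D "rows" standing in for Landsat/MODIS pixels at times t1 and t2.
landsat_t1 = [0.62, 0.60, 0.71, 0.69]  # observed fine-resolution row at t1
modis_t1   = [0.61, 0.61, 0.70, 0.70]  # coarse row at t1, upsampled to match
modis_t2   = [0.55, 0.55, 0.66, 0.66]  # coarse row at the prediction date t2

a_l1, d_l1 = haar_decompose(landsat_t1)
a_m1, _    = haar_decompose(modis_t1)
a_m2, _    = haar_decompose(modis_t2)

# Low-frequency prediction: shift the Landsat approximation by the MODIS
# change between dates (a stand-in for the paper's weighted mixture).
a_pred = [a + (m2 - m1) for a, m1, m2 in zip(a_l1, a_m1, a_m2)]

# High-frequency prediction: take the detail coefficients from Landsat alone.
landsat_t2_pred = haar_reconstruct(a_pred, d_l1)
print([round(v, 3) for v in landsat_t2_pred])
```

The point of the split is visible even in this toy: the coarse sensor drives the slowly varying brightness level, while the fine spatial texture is carried entirely by the Landsat detail coefficients rather than being interpolated.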

Highlights

  • Data fusion is the process of merging or combining data from several sources

  • Improved spatiotemporal resolution is possible by fusing imagery from a high-resolution platform such as Landsat with imagery that is lower in spatial resolution but higher in temporal resolution

  • The Background section of this paper provides additional context on data fusion and gives an overview of wavelet-based processing as a means of decomposing an image into low- and high-frequency components


Summary

INTRODUCTION

Data fusion is the process of merging or combining data from several sources. For certain applications, data fusion makes it possible to obtain results of better quality and/or quantity than is feasible from a single data source alone. This paper is concerned with the problem of combining imagery from different satellites, obtained at different times, in order to improve the effective spatiotemporal resolution for a given part of the earth's surface. This goal is achieved by fusing imagery from a high-resolution platform such as Landsat with imagery that is lower in spatial resolution but higher in temporal resolution. An enhanced version of STARFM provides a solution for heterogeneous pixels, but it still cannot accurately predict short-term transient changes that were not captured in any of the observed fine-resolution images. Another data fusion system, the Spatial Temporal Adaptive Algorithm for mapping Reflectance Change (STAARCH) (Hilker et al., 2009a), was developed to address a different limitation of STARFM.

BACKGROUND
Multiresolution Analysis
PROPOSED FUSION MODEL
EXPERIMENTAL RESULTS
CONCLUSION
