Abstract

Spatiotemporal fusion (STF) aims to generate remote-sensing data with both high spatial and high temporal resolution. One of the most widely used strategies in the literature is to fuse high temporal resolution images collected by the Moderate Resolution Imaging Spectroradiometer (MODIS) with images of finer spatial resolution, such as those collected by other satellite instruments like Landsat or Sentinel-2. Current STF methods generally fuse an upsampled MODIS image with the finer spatial resolution images, which leads to two main problems. First, the model uncertainty errors resulting from the ill-posed upsampling problem propagate into the fusion results, causing spatial and spectral distortion. Second, the spatial details of the upsampled MODIS image may differ significantly from those of the finer spatial resolution images, making the STF problem even more challenging. To tackle these issues, we develop a new linear regression-based STF strategy (LiSTF) that performs the reconstruction from a MODIS-like image (instead of from an upsampled MODIS image), thus reducing the model uncertainty errors and better preserving the spatial information. The MODIS-like images are built from the finer spatial resolution images via downsampling. Our experimental results, conducted on two publicly available datasets of Landsat–MODIS image pairs and one publicly available dataset of Sentinel–MODIS image pairs, show that the proposed LiSTF approach significantly improves the quantitative and qualitative performance of STF, particularly in terms of preserving spatial information.
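To make the general idea concrete, the following is a minimal, illustrative Python sketch rather than the authors' actual LiSTF implementation, whose exact formulation is given in the full paper. It assumes a simple block-averaging downsampler to build the MODIS-like image and a per-band linear regression fitted at coarse scale and then applied at fine scale; all function and variable names (build_modis_like, predict_fine_t2, fine_t1, modis_t2, scale) are hypothetical.

```python
# Illustrative sketch only: the downsampling scheme (block averaging), the
# per-band linear model, and all names below are assumptions for demonstration,
# not the LiSTF formulation from the paper.
import numpy as np
from sklearn.linear_model import LinearRegression


def build_modis_like(fine_img: np.ndarray, scale: int) -> np.ndarray:
    """Downsample a fine-resolution image of shape (H, W, B) to a MODIS-like
    image by averaging non-overlapping scale x scale blocks."""
    h, w, b = fine_img.shape
    hc, wc = h // scale, w // scale
    blocks = fine_img[: hc * scale, : wc * scale].reshape(hc, scale, wc, scale, b)
    return blocks.mean(axis=(1, 3))


def predict_fine_t2(fine_t1: np.ndarray, modis_t2: np.ndarray, scale: int) -> np.ndarray:
    """Toy linear-regression fusion: for each band, learn a linear mapping from
    the MODIS-like image at the reference date t1 to the real MODIS image at the
    prediction date t2 (both at coarse resolution), then apply that mapping to
    the fine-resolution reference image so its spatial detail is carried to t2."""
    modis_like_t1 = build_modis_like(fine_t1, scale)          # coarse, from fine data
    fine_t2 = np.empty(fine_t1.shape, dtype=float)
    for b in range(fine_t1.shape[2]):
        x = modis_like_t1[..., b].ravel().reshape(-1, 1)      # coarse predictor at t1
        y = modis_t2[..., b].ravel()                          # coarse observation at t2
        reg = LinearRegression().fit(x, y)                    # coarse-scale linear relation
        # Apply the coarse-scale relation pixel-wise at fine resolution.
        pred = reg.predict(fine_t1[..., b].ravel().reshape(-1, 1))
        fine_t2[..., b] = pred.reshape(fine_t1.shape[:2])
    return fine_t2
```

Note that, as described in the abstract, the regression here is driven by the MODIS-like image obtained by downsampling the fine-resolution reference image, so no upsampled MODIS image (and hence no ill-posed upsampling step) enters the reconstruction.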
