Spatiotemporal fusion is commonly used in remote sensing to generate images with both fine spatial and fine temporal resolution. In most existing spatiotemporal fusion methods, registration error causes a mismatch between the reflectance and the class abundance information within each MODIS pixel analyzed during fusion. In this article, a new spatiotemporal fusion model, robust flexible spatiotemporal data fusion (RFSDAF), is proposed that is robust to registration errors. RFSDAF uses a multiscale fusion strategy that adapts to different degrees of coregistration error: by incorporating multiscale information, it extends the analysis of reflectance and class fractions from individual MODIS pixels to neighborhoods of MODIS pixels, making it robust to coregistration errors. The method does not require a high-accuracy coregistration algorithm during preprocessing and can automatically reduce the effect of registration error to a large extent. RFSDAF is compared with four spatiotemporal image fusion algorithms, and its effectiveness in handling registration error is demonstrated on both a simulated dataset and two actual satellite datasets. The real-image experiments show that RFSDAF reduces the impact of registration error more effectively than the spatial and temporal adaptive reflectance fusion model (STARFM) and FSDAF, even when the latter are paired with a phase cross-correlation algorithm that coregisters the MODIS–Landsat images during preprocessing. Consequently, RFSDAF is highly robust for real spatiotemporal image fusion in the presence of registration errors and has strong potential for monitoring land surface dynamics.