Abstract

Spatiotemporal image fusion is a promising way to resolve the trade-off between the spatial and temporal resolutions of satellite images, and it has developed rapidly in recent years. However, two key challenges related to fusion accuracy remain: a) reducing the uncertainty of image fusion caused by sensor differences and b) handling strong temporal changes. To address these two issues, this paper presents a new method, Reliable and Adaptive Spatiotemporal Data Fusion (RASDF). In RASDF, the effects of four kinds of sensor differences on fusion are analyzed systematically, and a reliability index is proposed to describe the spatial distribution of the reliability of the input data for image fusion. An optimization strategy based on this spatially quantified reliability is then developed to improve the robustness of the fusion. In addition, an adaptive global unmixing model and an adaptive local unmixing model are constructed and used collaboratively to enhance the ability to retrieve strong temporal changes. The performance and robustness of RASDF were compared with six representative fusion methods on both real and simulated datasets covering homogeneous and heterogeneous sites. Experimental results indicate that RASDF achieves better performance and provides a more reliable image fusion solution in terms of reducing the impact of sensor differences and retrieving strong temporal changes.
