Abstract

Sentinel-2 and Sentinel-3 are two remote sensing satellite missions operated by the European Space Agency for global Earth observation. The temporal resolution of Sentinel-2 images and the spatial resolution of Sentinel-3 images may not be sufficient for local and precise monitoring. By spatiotemporally fusing Sentinel-2 and Sentinel-3 imagery, images with a temporal resolution of 1.4 days and a spatial resolution of 10 m can be produced. However, strong temporal change is a challenging factor for spatiotemporal fusion. The aim of this study was to compare the performance of the deep learning-based DMNet model with the flexible spatiotemporal data fusion (FSDAF) 2.0 and reliable and adaptive spatiotemporal data fusion (RASDF) algorithms for the spatiotemporal fusion of Sentinel-2 and Sentinel-3 images under strong temporal change. To this end, the Kansas dataset was developed for the spatiotemporal fusion of Sentinel-2 and Sentinel-3 images; it contains a large number of surface changes caused by extensive wheat harvesting. The results of this investigation show that in the case of strong temporal change, the deep learning-based DMNet model performed better than the FSDAF 2.0 and RASDF methods. In the case of weaker temporal change, on the other hand, the FSDAF 2.0 and RASDF methods performed considerably better than the DMNet model.
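To make the fusion setting concrete, the following is a minimal conceptual sketch, not the FSDAF 2.0, RASDF, or DMNet methods compared in this study: it predicts a fine-resolution image at the target date by adding the coarse-scale temporal change to a fine reference image, using hypothetical NumPy arrays and a simplified block-replication resampling step in place of real Sentinel-2/Sentinel-3 preprocessing.

    # Illustrative sketch only: a naive additive fusion baseline, not the methods
    # evaluated in the paper. All arrays are hypothetical reflectance grids.
    import numpy as np

    def upsample(coarse: np.ndarray, factor: int) -> np.ndarray:
        """Replicate each coarse pixel into a factor x factor block of fine pixels."""
        return np.kron(coarse, np.ones((factor, factor), dtype=coarse.dtype))

    def naive_fusion(fine_t1: np.ndarray, coarse_t1: np.ndarray,
                     coarse_t2: np.ndarray, factor: int = 30) -> np.ndarray:
        """Predict a fine image at t2 from a fine/coarse pair at t1 and a coarse
        image at t2 by adding the coarse temporal change (t2 - t1) to fine_t1."""
        delta = upsample(coarse_t2 - coarse_t1, factor)  # coarse-scale change
        return fine_t1 + delta                           # fine prediction at t2

    # Hypothetical example: a 300 m Sentinel-3-like grid (factor 30) paired with
    # a 10 m Sentinel-2-like grid of matching extent.
    rng = np.random.default_rng(0)
    coarse_t1 = rng.random((4, 4)).astype(np.float32)
    coarse_t2 = coarse_t1 + 0.1  # simulated temporal change at coarse scale
    fine_t1 = upsample(coarse_t1, 30) + rng.normal(0, 0.01, (120, 120)).astype(np.float32)
    fine_t2_pred = naive_fusion(fine_t1, coarse_t1, coarse_t2, factor=30)
    print(fine_t2_pred.shape)  # (120, 120)

Such a baseline assumes that temporal change observed at the coarse scale applies uniformly within each coarse pixel, which is exactly the assumption that breaks down under strong temporal change such as harvest events; FSDAF 2.0, RASDF, and DMNet each address this limitation in more sophisticated ways.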
