Abstract

Spatiotemporal data fusion algorithms have been developed to fuse satellite imagery from sensors with different spatial and temporal resolutions and to generate predicted imagery. In this study, we compare three spatiotemporal data fusion algorithms for blending Landsat-8/OLI and Terra-Aqua/MODIS images to map soybean and corn under five classification scenarios. The Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM), the Enhanced Spatial and Temporal Adaptive Reflectance Fusion Model (ESTARFM), and the Flexible Spatiotemporal Data Fusion (FSDAF) algorithm were compared in generating images for the 2016/2017 summer crop year. Classifications that included phenological metrics extracted from FSDAF- and STARFM-predicted EVI time series achieved higher overall accuracies than the other scenarios, 93.11% and 91.33%, respectively. These results show that phenological metrics extracted from predicted images are a promising alternative for overcoming cloud-cover limitations in soybean and corn mapping in tropical areas.
