Abstract

Multi-temporal remote sensing imagery is regarded as an effective tool for monitoring cropland. However, optical sensors often miss key crop growth stages because of cloud cover, which poses challenges for many studies. The synergy of SAR and optical data is expected to alleviate this problem, especially in areas with persistent cloud cover. However, because of the different imaging characteristics of optical and SAR sensors, most existing methods struggle to build a relationship between the two, let alone construct the long-term correlations needed to fill optical observation gaps with SAR data. Inspired by deep learning, this study presents a novel strategy to learn the relationship between optical and SAR time series based on sequential contextual information. Specifically, we extended the conventional CNN-RNN architecture to build a Multi-CNN-Sequence to Sequence (MCNN-Seq) model and formulated the correlation between optical and SAR time series. We verified the MCNN-Seq model and found that the accuracy of the predicted optical imagery depends on crop type and phenological stage, in the spatial and temporal domains respectively. For several crops, such as onion, winter wheat, corn, and sugar beet, our predictions fit well, with R2 values of 0.9409, 0.9824, 0.9157, and 0.9749, respectively. Compared with CNN and RNN baselines, the MCNN-Seq model achieves much better simulation accuracy in terms of R2 and RMSE. In general, the results demonstrate that deep learning models have the potential to synergize SAR and optical data and provide substitute information when optical data have long gaps due to persistent clouds.
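The abstract describes an architecture that extracts per-timestep features from SAR imagery with CNNs, encodes them with a recurrent encoder, and decodes a predicted optical time series. The paper's actual MCNN-Seq implementation is not given here, so the following is only a minimal numpy toy sketch of that general encoder-decoder idea; all layer sizes, weight names, and the single untrained forward pass are illustrative assumptions, not the authors' model.

```python
import numpy as np

def conv_features(patch, kernels):
    """Toy CNN step: 3x3 valid convolution + tanh + global average pooling.

    patch:   (C, H, W) SAR patch at one timestep
    kernels: (F, C, 3, 3) convolution filters
    returns: (F,) feature vector
    """
    C, H, W = patch.shape
    F = kernels.shape[0]
    out = np.zeros(F)
    for f in range(F):
        acc = np.zeros((H - 2, W - 2))
        for c in range(C):
            for i in range(H - 2):
                for j in range(W - 2):
                    acc[i, j] += np.sum(patch[c, i:i + 3, j:j + 3] * kernels[f, c])
        out[f] = np.tanh(acc).mean()
    return out

def mcnn_seq_forward(sar_series, kernels, enc_Wh, enc_Wx, dec_Wh, dec_Wy, T_out):
    """Encoder: CNN features per SAR timestep drive a simple RNN; the final
    hidden state summarizes the SAR sequence. Decoder: unroll that state into
    a predicted optical time series (e.g. a reflectance or index value)."""
    h = np.zeros(enc_Wh.shape[0])
    for patch in sar_series:
        h = np.tanh(enc_Wh @ h + enc_Wx @ conv_features(patch, kernels))
    outputs = []
    for _ in range(T_out):
        h = np.tanh(dec_Wh @ h)
        outputs.append(dec_Wy @ h)
    return np.stack(outputs)  # (T_out, output_dim)

# Untrained forward pass on random data, just to show the tensor flow.
rng = np.random.default_rng(0)
sar_series = [rng.standard_normal((2, 5, 5)) for _ in range(4)]  # 4 SAR dates
kernels = rng.standard_normal((4, 2, 3, 3)) * 0.1
enc_Wh = rng.standard_normal((8, 8)) * 0.1
enc_Wx = rng.standard_normal((8, 4)) * 0.1
dec_Wh = rng.standard_normal((8, 8)) * 0.1
dec_Wy = rng.standard_normal((1, 8)) * 0.1
pred = mcnn_seq_forward(sar_series, kernels, enc_Wh, enc_Wx, dec_Wh, dec_Wy, T_out=6)
```

In this sketch the input and output sequence lengths are decoupled, which is the property the abstract relies on: a SAR series of one length can be mapped to an optical series covering cloud-induced gaps of a different length.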
