Abstract
As crop yield stagnation, climate change, and the rising demand for agricultural products pose increasing challenges, mapping crop systems is becoming increasingly important. Winter wheat is one of the major cereal crops cultivated in China, ranking as the third largest crop in terms of production and harvested area. Accurately mapping winter wheat is necessary for implementing effective farm management practices. While many studies have successfully produced high spatiotemporal resolution land cover maps, relatively few crop-type map products are available in China. The growing archive of satellite image time series provides substantial opportunities for detailed crop mapping. This research presents a two-step method to map winter wheat in Shandong Province based on Sentinel-1 and Sentinel-2 time-series data using deep learning approaches. Winter crops were first mapped from time-series optical vegetation indices using deep learning methods. Winter wheat was then extracted from the winter crop mask by coupling optical and synthetic aperture radar time-series images. The results indicated that the Temporal Convolutional Neural Network (TempCNN) achieved the highest accuracy in mapping winter wheat, with an overall accuracy of 93.7%, a kappa coefficient of 0.907, and an F1-score of 0.989. It was followed by the one-dimensional Residual Network (ResNet), the Multi-Layer Perceptron (MLP), and the Lightweight Temporal Self-Attention Encoder (L-TAE), while the Temporal Attention Encoder (TAE) showed the lowest accuracy among the compared models. The mapped winter wheat area agrees well with independent county-level official census data (R² = 0.936). The proposed framework can also be applied in other regions to map different crops; future work will extend the model to other agricultural regions, where a larger number of crop types and natural vegetation types can be included and tested.
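For readers unfamiliar with the TempCNN architecture referenced above, the following is a minimal sketch of a TempCNN-style classifier for per-pixel satellite image time series, written in PyTorch. The layer sizes, the number of vegetation-index channels, the sequence length, and the two-class setup (winter crop vs. other) are illustrative assumptions, not the architecture or hyperparameters used in this study.

```python
# Minimal TempCNN-style sketch: 1D convolutions applied along the temporal axis
# of a pixel's vegetation-index time series, followed by a dense classifier.
# All sizes below are assumptions for illustration only.
import torch
import torch.nn as nn

class TempCNN(nn.Module):
    def __init__(self, in_channels: int = 4, seq_len: int = 24, n_classes: int = 2):
        super().__init__()
        # Three convolutional blocks operate over the temporal dimension.
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=5, padding=2),
            nn.BatchNorm1d(64),
            nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2),
            nn.BatchNorm1d(64),
            nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2),
            nn.BatchNorm1d(64),
            nn.ReLU(),
        )
        # Flatten temporal features and classify each pixel's time series.
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * seq_len, 256),
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(256, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time_steps)
        return self.head(self.conv(x))

# Example: a batch of 8 pixel time series, 4 index channels over 24 acquisition dates.
model = TempCNN(in_channels=4, seq_len=24, n_classes=2)
logits = model(torch.randn(8, 4, 24))
print(logits.shape)  # torch.Size([8, 2])
```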