Abstract

Accurate extraction of winter wheat area is essential for wheat yield estimation. Remotely sensed images have limited coverage and are acquired at different times, from different angles, and over different geographical areas, so the spectral response of the same land-cover feature varies from image to image. Although machine learning methods are widely used to extract winter wheat planting area, differences between images, that is, distribution differences between source-domain and target-domain data, mean that models trained on one area and applied directly to another yield unsatisfactory results. To achieve cross-regional extraction of winter wheat area, we propose a generative adversarial cross-domain network for image classification within the same image type. The proposed cross-domain network comprises a generative network and a feature extractor. The generative network creates diverse yet plausible samples from the original input data, constrained by a contrastive loss, and the feature extractor maps both the original and the generated data into a high-level representation. We constructed two time-series datasets and conducted a series of experiments on them, obtaining high-precision, optimal classification results; training was performed with 70%, 50%, 30% and 10% of the samples on both datasets. The results demonstrate the strong performance and cross-domain capability of our network. Furthermore, we extracted the winter wheat cultivation area of the entire Zhoukou City with an accuracy of 94.23%. Extensive experiments and applications on these datasets demonstrate that our method is feasible and reliable.
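To make the described architecture concrete, the sketch below pairs a generative network with a feature extractor and trains them with a contrastive loss, as outlined in the abstract. It is a minimal illustration only: the layer sizes, the residual form of the generator, the NT-Xent-style contrastive loss, and the use of PyTorch are assumptions for exposition, not the authors' actual implementation.

```python
# Hypothetical sketch of the generator + feature-extractor pairing described above.
# Layer widths, the residual generator, and the loss formulation are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Produces diverse but plausible samples from source-domain inputs."""
    def __init__(self, n_bands: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_bands, hidden), nn.ReLU(),
            nn.Linear(hidden, n_bands),
        )

    def forward(self, x):
        # Residual perturbation keeps generated samples close to the originals.
        return x + self.net(x)

class FeatureExtractor(nn.Module):
    """Maps original and generated samples to a shared high-level representation."""
    def __init__(self, n_bands: int, feat_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_bands, 64), nn.ReLU(),
            nn.Linear(64, feat_dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

def contrastive_loss(z_orig, z_gen, temperature: float = 0.1):
    """Pull the features of each original sample and its generated counterpart
    together while pushing apart mismatched pairs (NT-Xent-style)."""
    logits = z_orig @ z_gen.t() / temperature
    labels = torch.arange(z_orig.size(0), device=z_orig.device)
    return F.cross_entropy(logits, labels)

# Toy usage with random stand-ins for per-pixel time-series spectra.
n_bands = 10
gen, feat = Generator(n_bands), FeatureExtractor(n_bands)
x = torch.randn(16, n_bands)
loss = contrastive_loss(feat(x), feat(gen(x)))
loss.backward()
```

In this reading, the contrastive constraint keeps the generated samples semantically consistent with their sources, so the feature extractor learns representations that transfer across source and target domains.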
