Abstract

Mapping target crops before the harvest season in regions that lack crop-specific ground truth is critical for global food security. Using multispectral remote sensing and domain adaptation methods, prior studies aim to produce accurate crop maps in such regions (the target domain) with the help of crop-specific labelled remote sensing data from source regions (the source domain). However, existing approaches assume identical label spaces across the two domains, an assumption that often fails in practice, so a more adaptable solution is needed. This paper introduces the Multiple Crop Mapping Generative Adversarial Neural Network (MultiCropGAN), comprising a generator, a discriminator, and a classifier. The generator transforms target-domain data into the source domain, using identity losses to preserve the characteristics of the target data. The discriminator distinguishes generated data from real source-domain data and shares its structure and weights with the classifier, which locates crops in the target domain from the generator's output. The model's novel capability is locating target crops within the target domain despite differences between the crop-type label spaces of the target and source domains. In experiments, MultiCropGAN is benchmarked against various baseline methods; notably, when the label spaces differ, it significantly outperforms the baselines, improving Overall Accuracy by about 10%.
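The components described above can be illustrated with a minimal sketch. This is not the authors' implementation: the feature dimensions, activations, and linear layers below are placeholder assumptions chosen only to show the structure of a generator, an identity loss, and a discriminator/classifier pair that share a backbone.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: spectral features per pixel, hidden width, crop classes.
D_FEAT, D_HID, N_CROPS = 10, 16, 4

def linear(n_in, n_out):
    return rng.normal(0.0, 0.1, (n_in, n_out))

# Generator: maps target-domain pixels into the source domain.
W_gen = linear(D_FEAT, D_FEAT)

# Backbone shared by discriminator and classifier (shared weights, as the
# abstract describes); each has its own output head.
W_shared = linear(D_FEAT, D_HID)
W_disc = linear(D_HID, 1)        # head: real source vs. generated
W_cls = linear(D_HID, N_CROPS)   # head: crop-type logits

def generate(x):
    return x @ W_gen

def identity_loss(x_source):
    # Identity loss: source data passed through the generator should come
    # back unchanged, encouraging the mapping to preserve data characteristics.
    return float(np.mean((generate(x_source) - x_source) ** 2))

def discriminate(x):
    # Probability that x is real source-domain data.
    return 1.0 / (1.0 + np.exp(-(np.tanh(x @ W_shared) @ W_disc)))

def classify(x):
    # Crop labels predicted from the shared backbone's features.
    logits = np.tanh(x @ W_shared) @ W_cls
    return logits.argmax(axis=1)

# Inference path for the target domain: map into the source domain, then classify.
x_target = rng.normal(size=(5, D_FEAT))   # 5 target-domain pixels
preds = classify(generate(x_target))
```

In training, the generator would be updated to fool `discriminate` while minimizing `identity_loss`, and the shared backbone lets classification supervision on source data shape the discriminator's features; those update loops are omitted here.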
