Crop-type mapping is the foundation of grain security and digital agricultural management. Crop classification from remote sensing images must be accurate, efficient, and consistent across large-scale scenes. However, many current deep-learning methods for remote-sensing crop extraction adapt poorly to large-scale, complex scenes. Therefore, this study proposes a novel adaptive feature-fusion network for crop classification using single-temporal Sentinel-2 images. A selective patch module in the network adaptively integrates features from patches of different sizes to better handle complex scenes. In parallel, TabNet was used to extract spectral information from the center pixels of the patches, and multitask learning supervised this extraction to increase the weight given to spectral features while mitigating the negative impact of a small sample size. Superpixel optimization was applied to post-process the classification results and refine crop edges. In classifying peanut, rice, and corn from 2022 Sentinel-2 images of Henan Province, China, the proposed method achieved an F1 score of 96.53%, outperforming other mainstream methods. This indicates our model's potential for crop classification in large scenes.
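The core idea of the selective patch module — adaptively weighting features extracted at several patch sizes and summing them — can be illustrated with a minimal sketch. This is an assumption-laden simplification, not the paper's implementation: the function names, the per-scale score vector, and the use of a plain softmax over learned logits are all hypothetical, standing in for whatever gating mechanism the network actually learns.

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of per-scale logits.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def fuse_patch_features(features, scale_logits):
    """Fuse equal-length feature vectors, one per patch size,
    into a single vector via softmax-weighted summation.
    `scale_logits` plays the role of the learned selection scores."""
    weights = softmax(scale_logits)
    dim = len(features[0])
    return [sum(w * f[i] for w, f in zip(weights, features))
            for i in range(dim)]

# Hypothetical features from three patch sizes (e.g. 5x5, 9x9, 13x13).
f_small = [1.0, 0.0]
f_mid   = [0.0, 1.0]
f_large = [0.5, 0.5]
# Equal logits -> equal weights -> simple average of the three vectors.
fused = fuse_patch_features([f_small, f_mid, f_large], [0.0, 0.0, 0.0])
```

With equal logits each scale receives weight 1/3, so `fused` is the element-wise mean `[0.5, 0.5]`; in the real network the logits would be produced by a learned gating branch so that, per pixel, the most informative patch size dominates.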