Abstract

Crop field boundary extraction from remote sensing images is crucial for supporting agricultural production and planning. In recent years, deep convolutional neural networks (CNNs) have gained significant attention for edge detection tasks, and transformers have shown superior feature extraction and classification capabilities compared to CNNs owing to their self-attention mechanism. We proposed a novel framework that combines full edge extraction with CNNs and enhances connectivity with transformers, consisting of three stages: (a) preprocessing the training data; (b) training the semantic edge and spatial structure graph models; and (c) vectorizing the fusion of the semantic edge and spatial structure graph outputs. To address crop-field boundary extraction from high-resolution remote sensing images specifically, we developed a CNN model called Densification D-LinkNet, whose full-scale skip connections and edge-guided module adapt well to diverse crop-field boundary features. Additionally, we employed a spatial graph structure generator (Relationformer), based on object detection, that directly outputs the structural graph of crop field boundaries; its strong connectivity repairs the fragmented edges that can appear in semantic edge detection. Through multiple experiments and comparisons with other edge-detection methods, including BDCN, DexiNed, PiDiNet, and EDTER, we demonstrated that our proposed method achieves at least a 9.77% improvement in boundary intersection over union (IoU) and a 2.07% improvement in polygon IoU on two custom datasets. These results indicate the effectiveness and robustness of our approach.
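
The abstract does not include code, but the following minimal PyTorch sketch illustrates the full-scale skip connection idea attributed to Densification D-LinkNet: features from every encoder scale are resampled to a common resolution and fused in one decoder stage. The class name `FullScaleSkipDecoder`, the channel sizes, and the fusion layout are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FullScaleSkipDecoder(nn.Module):
    """Hypothetical sketch of a decoder stage with full-scale skip
    connections: every encoder scale is projected, resampled to the
    target resolution, concatenated, and fused by a 3x3 convolution."""

    def __init__(self, in_channels_list, out_channels):
        super().__init__()
        # One 1x1 projection per encoder scale, so all scales contribute
        # the same number of channels before fusion.
        self.projs = nn.ModuleList(
            [nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channels_list]
        )
        self.fuse = nn.Sequential(
            nn.Conv2d(out_channels * len(in_channels_list), out_channels, 3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, encoder_feats, target_size):
        # Resample every scale (up or down) to the decoder's resolution.
        resampled = [
            F.interpolate(proj(f), size=target_size, mode="bilinear", align_corners=False)
            for proj, f in zip(self.projs, encoder_feats)
        ]
        return self.fuse(torch.cat(resampled, dim=1))


# Example: four encoder scales of a ResNet-style backbone fused at 1/4 resolution.
feats = [torch.randn(1, c, s, s) for c, s in [(64, 128), (128, 64), (256, 32), (512, 16)]]
decoder = FullScaleSkipDecoder([64, 128, 256, 512], out_channels=64)
out = decoder(feats, target_size=(128, 128))
print(out.shape)  # torch.Size([1, 64, 128, 128])
```

Connecting all encoder scales to each decoder stage, rather than only the matching scale, is what lets such a decoder capture both fine boundary detail and larger field context in one fused feature map.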
