Autonomous agricultural machinery path planning requires high-precision field boundary information. To address the challenge of rapidly acquiring accurate information about different types of land objects in complex field scenarios, this study introduces an interactive segmentation-based method and software for agricultural field boundary perception, designed for high-resolution orthophotos. The method delineates various agricultural objects in an image, such as crops, soil, roads, field edges, field exits, and obstacles. In contrast to previous research on agricultural object detection and segmentation, this study proposes an interactive deep image segmentation model for perceiving multiple types of agricultural features. During segmentation, manually placed positive and negative click points provide supervisory information that guides the model. The method is implemented with the PaddlePaddle deep learning framework, and the open-source software EISeg is extended into a dedicated tool for agricultural image segmentation. Within 3 to 4 interactive iterations, the method reaches a mean Intersection over Union (mIoU) of about 90%. The model's average inference time on the training server was 0.197 s, satisfying the real-time requirements of interactive segmentation. By accurately segmenting agricultural land features from high-resolution orthoimagery, the proposed method can support the construction of high-precision navigation maps for autonomous agricultural machinery.
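The mIoU figure quoted above can be illustrated with a minimal sketch. This is not the authors' evaluation code; it is a generic NumPy implementation of mean Intersection over Union for multi-class label maps, with a hypothetical toy example (two classes standing in for, say, soil and crop):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean Intersection over Union across classes.

    pred, target: integer label maps of identical shape.
    Classes absent from both maps are skipped so they do not
    distort the average.
    """
    ious = []
    for c in range(num_classes):
        p = pred == c
        t = target == c
        union = np.logical_or(p, t).sum()
        if union == 0:  # class absent from both maps
            continue
        inter = np.logical_and(p, t).sum()
        ious.append(inter / union)
    return float(np.mean(ious))

# Toy 4x4 label maps (hypothetical): class 0 = soil, class 1 = crop
target = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1],
                   [0, 0, 1, 1],
                   [0, 0, 1, 1]])
pred = np.array([[0, 0, 1, 1],
                 [0, 0, 1, 1],
                 [0, 1, 1, 1],
                 [0, 1, 1, 1]])
print(mean_iou(pred, target, 2))  # (6/8 + 8/10) / 2 = 0.775
```

In this toy case, two mislabeled pixels drop the per-class IoUs to 0.75 and 0.8, giving an mIoU of 0.775; a score of about 0.90, as reported, indicates close agreement between the interactive segmentation and the reference masks.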