When uncrewed agricultural machinery performs autonomous operations in the field, it inevitably encounters obstacles such as people, livestock, poles, and stones. Accurate recognition of obstacles in the field environment is therefore an essential function. To ensure the safety and improve the operational efficiency of autonomous farming equipment, this study proposes an improved YOLOv8-based field obstacle detection model that leverages depth information from a binocular camera for precise obstacle localization. The improved model incorporates a Large Separable Kernel Attention (LSKA) module to enhance the extraction of field obstacle features, and a Poly Kernel Inception (PKI) Block that reduces model size while improving detection of obstacles at different scales. An auxiliary detection head is also added to further improve accuracy. Combining the improved model with a binocular camera yields both obstacle detections and their three-dimensional coordinates. Experimental results show that the improved model achieves a mean average precision (mAP) of 91.8%, a 3.4% improvement over the original model, while reducing floating-point operations to 7.9 GFLOPs, and it shows clear advantages over other detection algorithms. In localization accuracy tests over camera-to-obstacle distances of 2–10 m, the maximum average error and maximum relative error across five types of obstacles were 0.16 m and 2.26%, respectively. These findings confirm that the proposed model meets the requirements for obstacle detection and localization in field environments.
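To illustrate how a detection from the improved model can be combined with binocular depth information to obtain three-dimensional obstacle coordinates, the following is a minimal sketch of standard stereo back-projection. It is not the authors' implementation: the function name, the calibration parameters (fx, fy, cx, cy, baseline), and the example values are assumptions for illustration only.

```python
# Minimal sketch (not from the paper): converting a detected bounding-box centre
# plus a stereo disparity value into 3-D camera coordinates.
# Calibration values and the helper name are illustrative assumptions.

def pixel_to_3d(u, v, disparity, fx, fy, cx, cy, baseline):
    """Back-project a pixel with known disparity into camera coordinates (metres)."""
    if disparity <= 0:
        raise ValueError("disparity must be positive")
    z = fx * baseline / disparity   # depth from stereo triangulation
    x = (u - cx) * z / fx           # lateral offset from the optical axis
    y = (v - cy) * z / fy           # vertical offset from the optical axis
    return x, y, z

# Example: box centre at (640, 360) px, disparity 48 px,
# fx = fy = 1000 px, principal point (640, 360), baseline 0.12 m
print(pixel_to_3d(640, 360, 48.0, 1000.0, 1000.0, 640.0, 360.0, 0.12))
# -> (0.0, 0.0, 2.5): the obstacle lies 2.5 m straight ahead of the camera
```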