Computer vision-based precision weed control offers a promising avenue for reducing herbicide input and the associated costs of weed management. However, the substantial investments in time and labor required for collecting and annotating weed image data pose challenges to developing effective deep learning models. A further limitation stems from the difficulty of achieving accurate and reliable weed detection across varying growth stages, densities, and ecotypes in field scenarios. To address these issues, this research investigated a novel methodology that employs a segmentation algorithm to accurately delineate crop contours in the image and then detects weeds through image processing. Furthermore, a segmentation network was developed based on the YOLO architecture to address the substantial computing resource demands associated with segmentation algorithms. This was achieved through the design of a new backbone, the incorporation of an attention mechanism, and the modification of the feature fusion technique. The resulting network achieved higher segmentation accuracy with lower computational demand. The effectiveness of three different attention modules for the segmentation task was additionally investigated. Experimental results showed that inserting Criss-Cross Attention significantly improved the model's performance, and this module was therefore incorporated into the enhanced methodology. The enhanced model achieved a Mean Intersection over Union (mIoU50) of 90.9 %, with precision increasing by 5.9 % and Giga Floating-point Operations (GFLOPs) reduced by 15.56 %, demonstrating its suitability for deployment in resource-constrained computing environments. The findings presented in this study hold substantial theoretical and practical implications for precision weed management.
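The sketch below illustrates the indirect weed-detection idea summarized above: the segmentation model marks crop regions, and remaining vegetation is treated as weed candidates via image processing. It is a minimal illustration only; the crop mask is assumed to come from the trained segmentation network, and the Excess Green (ExG) index, thresholds, and morphological filtering used here are assumptions for demonstration, not the paper's exact implementation.

```python
import cv2
import numpy as np


def detect_weeds(image_bgr: np.ndarray, crop_mask: np.ndarray,
                 exg_threshold: float = 20.0, min_area: int = 50):
    """Return contours of vegetation lying outside the predicted crop mask.

    image_bgr : HxWx3 uint8 field image
    crop_mask : HxW binary mask (1 = crop) produced by the segmentation model
    Thresholds are illustrative assumptions, not values from the study.
    """
    # Excess Green vegetation index separates plants from soil background
    b, g, r = cv2.split(image_bgr.astype(np.float32))
    exg = 2.0 * g - r - b
    vegetation = (exg > exg_threshold).astype(np.uint8)

    # Vegetation pixels not covered by the crop mask are candidate weeds
    weed_mask = cv2.bitwise_and(vegetation, 1 - crop_mask.astype(np.uint8))

    # Morphological opening suppresses isolated noise pixels
    kernel = np.ones((3, 3), np.uint8)
    weed_mask = cv2.morphologyEx(weed_mask, cv2.MORPH_OPEN, kernel)

    # Keep only weed regions larger than a minimum area
    contours, _ = cv2.findContours(weed_mask * 255, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [c for c in contours if cv2.contourArea(c) >= min_area]
```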