Abstract

Machine vision-based precision weed management is a promising solution for substantially reducing herbicide inputs and weed control costs. The objective of this research was to compare two deep learning-based approaches for detecting weeds in cabbage: (1) detecting weeds directly, and (2) detecting crops by generating bounding boxes that cover the crops and treating any green pixels outside the bounding boxes as weeds. The precision, recall, F1-score, mAP@0.5, and mAP@0.5:0.95 of You Only Look Once (YOLO) v5 for detecting cabbage were 0.986, 0.979, 0.982, 0.995, and 0.851, respectively, while the corresponding metrics for YOLOv8 were 0.973, 0.985, 0.979, 0.993, and 0.906. However, none of these metrics exceeded 0.891 when detecting weeds directly. The reduced performance of direct weed detection could be attributed to the diverse weed species present at varying densities and growth stages with differing plant morphologies. A segmentation procedure proved effective for extracting weeds outside the bounding boxes covering the crops, thereby enabling effective indirect weed detection. The indirect approach demands less manpower because constructing a large training dataset containing a variety of weed species is unnecessary. However, in certain cases weeds are likely to remain undetected because they grow in close proximity to the crops and fall within the predicted bounding boxes that encompass the crops. The models generated in this research can be used in conjunction with the machine vision subsystem of a smart sprayer or mechanical weeder. © 2024 Society of Chemical Industry.

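The sketch below illustrates the indirect weed detection idea summarized above: a trained YOLO detector locates the crop plants, and any remaining vegetation pixels outside the predicted crop bounding boxes are treated as weeds. It is a minimal illustration, not the study's implementation; the model file name, image file name, and excess-green (ExG) threshold are hypothetical placeholders.

```python
# Minimal sketch of indirect weed detection: detect crop bounding boxes with
# a trained YOLO model, then flag green (vegetation) pixels outside those
# boxes as weeds. File names and the ExG threshold are illustrative only.
import cv2
import numpy as np
from ultralytics import YOLO  # assumes the Ultralytics package is installed


def weed_mask_from_crop_boxes(image_path, model_path="cabbage_detector.pt", exg_thresh=20.0):
    img = cv2.imread(image_path)
    b, g, r = cv2.split(img.astype(np.float32))

    # Excess Green index (ExG = 2G - R - B) as a simple vegetation segmentation.
    exg = 2.0 * g - r - b
    vegetation = (exg > exg_thresh).astype(np.uint8)

    # Detect crop plants and erase vegetation inside their bounding boxes.
    model = YOLO(model_path)
    result = model(img)[0]
    for x1, y1, x2, y2 in result.boxes.xyxy.cpu().numpy().astype(int):
        vegetation[y1:y2, x1:x2] = 0

    # Vegetation pixels remaining outside the crop boxes are presumed weeds.
    return vegetation


if __name__ == "__main__":
    mask = weed_mask_from_crop_boxes("field_image.jpg")
    cv2.imwrite("weed_mask.png", mask * 255)
```

As the abstract notes, a limitation of this scheme is that weeds growing inside a predicted crop box are erased along with the crop and therefore go undetected.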