Selective robotic harvesting of broccoli requires lightweight, high-precision models that detect broccoli heads efficiently. This study therefore introduces a lightweight, high-precision detection model named LBDC-YOLO (Lightweight Broccoli Detection in Complex Environment, You Only Look Once), based on an improved YOLOv8 (You Only Look Once, Version 8). The model adopts the Slim-neck design paradigm based on GSConv to reduce computational complexity. Triplet Attention is integrated into the backbone network to capture cross-dimensional interactions between the spatial and channel dimensions, strengthening feature extraction under multiple interfering factors. The original neck is replaced with a BiFPN (Bidirectional Feature Pyramid Network), which optimizes the cross-layer connections and applies weighted feature fusion to integrate multi-scale features more effectively. The model was trained and tested on a dataset of broccoli images collected under real field conditions with various interfering factors. Experimental results show that LBDC-YOLO achieves an average detection accuracy of 94.44% for broccoli heads. Compared with the original YOLOv8n, LBDC-YOLO reduces computational complexity by 32.1%, parameters by 47.8%, and model size by 44.4%, while improving accuracy by 0.47 percentage points. Compared with YOLOv5n, YOLOv5s, and YOLOv7-tiny, it delivers higher detection accuracy at lower computational cost, giving it clear advantages for broccoli detection in complex field environments. The results provide an accurate, lightweight method for detecting broccoli heads in complex field environments, and this work aims to inspire further research in precision agriculture and to advance model-assisted agricultural practice.
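To make the two architectural ideas named above concrete, the following is a minimal PyTorch sketch, not the authors' implementation: a GSConv-style block in the spirit of the Slim-neck by GSConv paper (a dense convolution for half the channels, a cheap depthwise convolution for the other half, then a channel shuffle) and BiFPN's fast normalized weighted fusion (one learnable non-negative scalar per input feature map). Hyperparameters such as the 5×5 depthwise kernel and the epsilon value are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class GSConv(nn.Module):
    """GSConv-style block (after Li et al., "Slim-neck by GSConv"): half the
    output channels come from a standard convolution, the other half from a
    cheap depthwise convolution, and a channel shuffle mixes the two halves.
    The 5x5 depthwise kernel here is an assumption, not taken from the paper."""
    def __init__(self, c_in: int, c_out: int, k: int = 1, s: int = 1):
        super().__init__()
        c_half = c_out // 2
        self.dense = nn.Sequential(
            nn.Conv2d(c_in, c_half, k, s, k // 2, bias=False),
            nn.BatchNorm2d(c_half), nn.SiLU())
        self.cheap = nn.Sequential(  # depthwise: one filter per channel
            nn.Conv2d(c_half, c_half, 5, 1, 2, groups=c_half, bias=False),
            nn.BatchNorm2d(c_half), nn.SiLU())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x1 = self.dense(x)
        x2 = self.cheap(x1)
        y = torch.cat((x1, x2), dim=1)  # [B, c_out, H, W]
        b, c, h, w = y.shape
        # channel shuffle with 2 groups: interleave dense and cheap channels
        return y.view(b, 2, c // 2, h, w).transpose(1, 2).reshape(b, c, h, w)

class WeightedFusion(nn.Module):
    """Fast normalized weighted fusion as in BiFPN (Tan et al., EfficientDet):
    O = sum_i(w_i * I_i) / (eps + sum_j w_j), with weights kept non-negative."""
    def __init__(self, num_inputs: int, eps: float = 1e-4):
        super().__init__()
        self.w = nn.Parameter(torch.ones(num_inputs))
        self.eps = eps

    def forward(self, features: list[torch.Tensor]) -> torch.Tensor:
        w = torch.relu(self.w)        # clamp learned weights to be non-negative
        w = w / (w.sum() + self.eps)  # normalize without a costly softmax
        return sum(wi * fi for wi, fi in zip(w, features))

# Usage sketch: fuse two same-shape feature maps, then refine with GSConv.
p4, p5_up = torch.randn(1, 128, 40, 40), torch.randn(1, 128, 40, 40)
fused = WeightedFusion(num_inputs=2)([p4, p5_up])
out = GSConv(128, 128, k=3)(fused)
print(out.shape)  # torch.Size([1, 128, 40, 40])
```

The scalar fusion weights add almost no parameters or FLOPs, which is consistent with the lightweight goal the abstract describes; how the paper wires these blocks into YOLOv8's neck is not specified here.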