To address the poor detection performance for grapes in orchards caused by leaf occlusion and fruit overlap, this study proposes an improved grape detection method, YOLOv7-MCSF, based on the You Only Look Once v7 (YOLOv7) framework. First, the original backbone network is replaced with MobileOne to make the model lightweight and reduce its parameter count. In addition, a Channel Attention (CA) module is added to the neck network to suppress interference from the orchard background and accelerate inference. Second, an SPPFCSPC spatial pyramid pooling module is embedded to speed up image feature fusion while maintaining a consistent receptive field. Finally, the Focal-EIoU loss function is employed to optimize the regression of the prediction boxes, accelerating their convergence and improving regression accuracy. The experimental results indicate that, compared with the original YOLOv7 model, YOLOv7-MCSF reduces the model weight by 26.9%, increases the frame rate by 21.57 f/s, and improves precision, recall, and mAP by 2.4%, 1.8%, and 3.5%, respectively. The improved model can identify grape clusters efficiently and in real time, providing technical support for deploying grape detection systems on mobile and embedded devices in orchard environments.
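For reference, a common formulation of the Focal-EIoU loss used for bounding-box regression is sketched below, following the general Focal and Efficient IoU loss; the specific hyperparameter settings used in this study are not given in the abstract and are assumptions here:

\[
\mathcal{L}_{\mathrm{EIoU}} = 1 - \mathrm{IoU} + \frac{\rho^2\!\left(b, b^{gt}\right)}{c^2} + \frac{\rho^2\!\left(w, w^{gt}\right)}{C_w^2} + \frac{\rho^2\!\left(h, h^{gt}\right)}{C_h^2},
\qquad
\mathcal{L}_{\mathrm{Focal\text{-}EIoU}} = \mathrm{IoU}^{\gamma}\,\mathcal{L}_{\mathrm{EIoU}},
\]

where \(b\) and \(b^{gt}\) are the centers of the predicted and ground-truth boxes, \(w, h\) and \(w^{gt}, h^{gt}\) their widths and heights, \(\rho(\cdot)\) the Euclidean distance, \(c\) the diagonal length of the smallest box enclosing both, \(C_w\) and \(C_h\) that enclosing box's width and height, and \(\gamma\) a focusing parameter that up-weights high-quality (high-IoU) boxes so that well-localized examples dominate the regression gradient.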