Abstract

This article proposes a real-time apple-recognition method for picking robots based on an improved YOLOv5. The method accurately recognizes different apple targets on fruit trees so that a robot can adjust its position and avoid obstructions during picking. First, the original BottleneckCSP module in the YOLOv5 backbone network is enhanced to extract deeper image features while remaining lightweight. Second, an ECA (Efficient Channel Attention) module is embedded into the improved backbone to better extract the features of different apple targets. Third, the network's initial anchor box sizes are adjusted so that apples in distant planting rows are not detected. The results demonstrate that the improved model achieves high precision and recall across the six apple picking-method categories, with an average recognition time of 0.025 s per image. Tested on these six categories against other models, including the original YOLOv5 as well as the YOLOv3 and EfficientDet-D0 algorithms, the improved model raises mAP by 1.95%, 17.6%, and 12.7%, respectively. This method provides technical support for a robot's picking hand to actively avoid obstructions caused by branches during fruit harvesting, effectively reducing apple loss.
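The ECA module mentioned above applies lightweight channel attention: a global average pool over each channel, a 1-D convolution across channels with an adaptive kernel size, and a sigmoid gate that rescales the feature map. A minimal NumPy sketch of this idea follows; it is not the paper's implementation, and the uniform convolution weights are a placeholder for what would be learned parameters in the actual network.

```python
import numpy as np

def eca(feature_map, gamma=2, b=1):
    """Efficient-Channel-Attention-style gating on a (C, H, W) feature map.

    Sketch only: the 1-D conv weights below are a uniform placeholder;
    in the real module they are learned during training.
    """
    C = feature_map.shape[0]
    # Adaptive kernel size: nearest odd integer to (log2(C) + b) / gamma
    t = int(abs((np.log2(C) + b) / gamma))
    k = t if t % 2 else t + 1
    # Squeeze: global average pooling over spatial dims -> (C,)
    y = feature_map.mean(axis=(1, 2))
    # 1-D convolution across neighboring channels ("same" padding)
    pad = k // 2
    y_pad = np.pad(y, pad, mode="edge")
    w = np.ones(k) / k  # placeholder weights
    attn = np.array([np.dot(y_pad[i:i + k], w) for i in range(C)])
    # Excite: sigmoid gate, broadcast over the spatial dimensions
    gate = 1.0 / (1.0 + np.exp(-attn))
    return feature_map * gate[:, None, None]
```

Because the gate acts per channel, the module adds only a handful of parameters, which is consistent with the abstract's emphasis on keeping the backbone lightweight.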