Abstract

For the operation and visual positioning of a banana robot, it is important to accurately locate the rachis and the cutoff point of the male flower cluster. The main factors affecting the three-dimensional (3D) positioning accuracy of this cutoff point are the image detection algorithm and positioning errors in the depth direction. In this study, a new YOLOv5-B model was constructed by introducing the InvolutionBottleneck module into the YOLOv5 network structure and improving the loss function, in order to increase the accuracy and speed of small-target detection. The contour of the rachis was then segmented using an edge detection algorithm, and the optimal cutoff point was obtained. Finally, a robot experiment platform with stereo vision for cutting off banana male flower clusters was built; the 3D space coordinates of the cutoff point on the rachis were obtained through visual inspection, and the positioning errors in the depth direction were analyzed. The experiments show that the overall mean average precision (mAP) of the YOLOv5-B model for multi-target recognition of bananas is 93.2%, higher than that of the YOLOv5 and PP-YOLOv2 models; in particular, its small-target detection accuracy exceeds that of the other models. The YOLOv5-B model is also fast, with an average image processing time of only 0.009 s per image. An accurate geometric relation model between the depth camera and laser measurements was established, and the positioning errors were analyzed: the median error (MEDE) of the depth coordinates was 8 mm and the median absolute deviation (MAD) was 2 mm. If the depth error is kept within the compensation control range, the requirements of rachis segmentation by the robot can be met.
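As a rough illustration of the positioning pipeline summarized above, the sketch below back-projects a detected cutoff-point pixel and its camera depth into 3D camera-frame coordinates with a standard pinhole model, and then computes MEDE and MAD statistics from paired depth-camera and laser readings. This is not the authors' implementation: the intrinsics, pixel coordinates, and depth values are hypothetical placeholders, and the MAD here uses the common "median absolute deviation about the median" definition, which may differ from the paper's exact formulation.

```python
"""Illustrative sketch only: pinhole back-projection of a cutoff point and
depth-error statistics (MEDE, MAD). All numeric values are placeholders."""
import numpy as np

def pixel_to_camera_xyz(u, v, depth_mm, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth Z (mm) to camera-frame XYZ (mm)."""
    x = (u - cx) * depth_mm / fx
    y = (v - cy) * depth_mm / fy
    return np.array([x, y, depth_mm])

def depth_error_stats(measured_mm, reference_mm):
    """MEDE = median of signed depth errors; MAD = median absolute deviation
    of those errors about their median (both in mm)."""
    errors = np.asarray(measured_mm, float) - np.asarray(reference_mm, float)
    mede = np.median(errors)
    mad = np.median(np.abs(errors - mede))
    return mede, mad

if __name__ == "__main__":
    # Hypothetical camera intrinsics and detection output, for illustration.
    fx, fy, cx, cy = 615.0, 615.0, 320.0, 240.0
    u, v, z = 350, 260, 820.0  # cutoff-point pixel and camera depth (mm)
    print("cutoff point (camera frame, mm):",
          pixel_to_camera_xyz(u, v, z, fx, fy, cx, cy))

    # Hypothetical paired depth-camera vs. laser reference readings (mm).
    camera_z = [812, 905, 1010, 760, 880]
    laser_z = [805, 898, 1000, 750, 874]
    mede, mad = depth_error_stats(camera_z, laser_z)
    print(f"MEDE = {mede:.1f} mm, MAD = {mad:.1f} mm")
```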
