Abstract

Aiming at the problem that FC-GQ-CNN cannot classify the objects it detects for grasping, we propose a new classifiable grasp detection method that combines object detection and grasp detection, based on FC-GQ-CNN and YOLOv4. First, FC-GQ-CNN detects parts of various types to obtain the highest-quality robot grasp (the 3D coordinates of the grasp point and the angle of the grasp plane). Second, a YOLOv4 model trained on a parts dataset detects the parts to obtain their class labels and positioning bounding boxes. Third, Canny edge detection, Sklansky's convex hull algorithm, and other image processing methods refine each positioning bounding box into a minimum bounding rectangle. Finally, the left-ray method matches the 2D coordinates of the grasp point against the refined positioning bounding boxes, and the classified grasp detection result is determined by the minimum bounding rectangle into which the grasp point falls. Experimental results show that the proposed method identifies the class of the grasped object, that the refined positioning bounding boxes resolve the classification errors caused when a grasp point matches multiple bounding boxes, and that the classification accuracy of grasp detection improves.
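The bounding-box refinement step can be sketched as follows. This is an illustrative stand-in, not the paper's implementation: it uses the monotone-chain convex hull algorithm in place of Sklansky's algorithm (which OpenCV's `cv2.convexHull` applies to contour points extracted from Canny edges), and in practice `cv2.minAreaRect` would then yield the minimum bounding rectangle from the hull points.

```python
def convex_hull(points):
    """Monotone-chain convex hull of 2D points, returned in
    counter-clockwise order starting from the lowest-left point.
    Stands in for Sklansky's algorithm on edge-contour points."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of the cross product (OA x OB); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower = []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)

    upper = []
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)

    # Drop the duplicated endpoints where the two chains meet
    return lower[:-1] + upper[:-1]


# Interior points (e.g. edge pixels inside the part silhouette) are discarded;
# only the outer hull remains, ready for a minimum-area-rectangle fit.
hull = convex_hull([(0, 0), (4, 0), (4, 4), (0, 4), (2, 2)])
```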
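The left-ray matching step can be sketched as a ray-casting point-in-polygon test: cast a horizontal ray from the grasp point to the left and count crossings with the rectangle's edges; an odd count means the point is inside. The function and variable names below are illustrative assumptions, not the paper's code.

```python
def point_in_polygon(pt, poly):
    """Left-ray test: cast a horizontal ray from pt to the left and
    count crossings with the polygon edges; odd count means inside."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge spans the ray's y level
            # x coordinate where the edge crosses the horizontal line at y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross < x:  # crossing lies to the left of the point
                inside = not inside
    return inside


def classify_grasp(grasp_xy, labeled_rects):
    """Return the class label of the first minimum bounding rectangle
    (given as its 4 corner points) containing the grasp point, else None."""
    for label, rect in labeled_rects:
        if point_in_polygon(grasp_xy, rect):
            return label
    return None


# Two hypothetical part classes with their minimum bounding rectangles
rects = [("bolt", [(0, 0), (4, 0), (4, 4), (0, 4)]),
         ("nut", [(6, 0), (9, 0), (9, 3), (6, 3)])]
```

Because each refined rectangle hugs its part tightly, a grasp point rarely falls into more than one rectangle, which is what removes the ambiguous multi-box matches described above.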
