Abstract

A robotic grasp detection algorithm based on multiscale features is proposed for autonomous robotic grasping in unstructured environments. The detection model builds on the YOLOv3 object detection algorithm and retains its multiscale detection scheme to improve perception of grasp rectangles at different scales. Squeeze-and-excitation blocks are embedded into the Residual Network (ResNet) backbone of the original model, and deformable convolution (DC) is introduced, giving the model stronger feature extraction ability for more complex grasp detection tasks. Meanwhile, prediction of the orientation angle is reformulated as a combination of classification and regression, enabling the angle of the grasp rectangle to be predicted for objects in different poses. The model was evaluated on the Cornell grasp dataset. The results demonstrate that the proposed algorithm effectively balances detection accuracy and efficiency, and that grasp rectangle prediction transfers to novel objects. Online grasping experiments on a Baxter robot achieve an average grasp success rate of 93% over 10 different objects, demonstrating the practical feasibility of the algorithm.
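
To illustrate the architectural modifications described above, the following is a minimal sketch (in PyTorch, using torchvision's deformable convolution) of a residual block augmented with a squeeze-and-excitation branch and a deformable convolution. The channel sizes, reduction ratio, and placement of the deformable layer are illustrative assumptions, not the paper's exact configuration.

    import torch
    import torch.nn as nn
    from torchvision.ops import DeformConv2d

    class SEDeformableResBlock(nn.Module):
        def __init__(self, channels, reduction=16):
            super().__init__()
            # Offsets for the deformable convolution are predicted by a
            # plain conv: 2 offsets (x, y) per position of the 3x3 kernel.
            self.offset = nn.Conv2d(channels, 2 * 3 * 3, kernel_size=3, padding=1)
            self.deform = DeformConv2d(channels, channels, kernel_size=3, padding=1)
            self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
            self.bn1 = nn.BatchNorm2d(channels)
            self.bn2 = nn.BatchNorm2d(channels)
            self.relu = nn.ReLU(inplace=True)
            # Squeeze-and-excitation: global pooling followed by a two-layer
            # bottleneck that produces per-channel gates in (0, 1).
            self.se = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Conv2d(channels, channels // reduction, kernel_size=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // reduction, channels, kernel_size=1),
                nn.Sigmoid(),
            )

        def forward(self, x):
            out = self.relu(self.bn1(self.deform(x, self.offset(x))))
            out = self.bn2(self.conv(out))
            out = out * self.se(out)   # channel-wise recalibration
            return self.relu(out + x)  # residual connection

Similarly, the classification-plus-regression treatment of the orientation angle can be sketched as follows: the network classifies the angle into one of several bins and regresses a residual offset within the predicted bin. The number of bins and the offset convention (a fraction in [0, 1) across the bin width) are assumptions for illustration, not the paper's exact parameterization.

    import math

    NUM_BINS = 18                   # assumed: 180 degrees split into 10-degree bins
    BIN_WIDTH = math.pi / NUM_BINS  # bin width in radians

    def decode_angle(bin_index: int, offset: float) -> float:
        """Recover the grasp angle in radians from the predicted bin and offset."""
        return bin_index * BIN_WIDTH + offset * BIN_WIDTH

    # Example: bin 4 with offset 0.5 is the center of the fifth 10-degree bin.
    angle = decode_angle(4, 0.5)
    print(math.degrees(angle))      # 45.0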
