Abstract

Grasp pose detection estimates the position and orientation of the robot end-effector needed to grasp objects from RGB or RGB-D images. In this paper, we propose a novel grasp pose detection network that generates 3-DOF grasp poses from a single RGB image. The network follows an anchor-based object detection pipeline and incorporates an angle detection unit. Furthermore, we redesign the grasp angle predictor as a classification unit to improve the accuracy of grasp rotation estimation. Our method classifies the predicted angle into dense bins, in contrast with previous regression methods or sparse classification methods. Moreover, an angle smooth label is designed to avoid the abrupt change in the angle loss caused by the periodic nature of the angle. We validate our algorithm on the Cornell Grasp Dataset and obtain higher detection accuracy than state-of-the-art methods. A real-scenario experiment also demonstrates the effectiveness of our method: a robot equipped with a parallel gripper achieves a 96.4% grasp success rate.
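To illustrate the idea of a circular smooth label for dense angle classification, the sketch below builds a soft target that peaks at the ground-truth angle bin and decays smoothly with wrap-around, so that nearly periodic angles are treated as close. This is only an illustrative assumption of how such a label could be constructed; the bin count, Gaussian width, and function name are hypothetical and not taken from the paper.

```python
# A minimal sketch (not the authors' implementation) of a circular "smooth
# label" for dense angle classification. The target distribution peaks at the
# ground-truth angle bin and decays smoothly, wrapping around the period so
# that angles near the boundary are treated as neighbours.
import numpy as np

def circular_smooth_label(angle, num_bins=180, sigma=4.0, period=np.pi):
    """Return a soft target vector over `num_bins` angle classes.

    angle  : ground-truth grasp rotation in radians, assumed in [0, period).
    period : pi for a parallel-jaw gripper, since theta and theta + pi
             describe the same grasp.
    """
    bin_centers = (np.arange(num_bins) + 0.5) * period / num_bins
    # Circular distance between each bin center and the ground-truth angle.
    diff = np.abs(bin_centers - angle)
    circ_diff = np.minimum(diff, period - diff)
    # Gaussian-shaped soft label, measured in units of bins.
    label = np.exp(-(circ_diff * num_bins / period) ** 2 / (2.0 * sigma ** 2))
    return label / label.sum()

# Example: the soft label for a 30-degree grasp peaks at the bin nearest 30
# degrees and assigns symmetric mass to neighbouring bins, including
# wrap-around neighbours near the 0/180-degree boundary.
target = circular_smooth_label(np.deg2rad(30.0))
print(target.argmax(), target.max())
```

Compared with a hard one-hot label, such a soft target keeps the loss continuous when the ground-truth angle crosses the periodic boundary, which is the discontinuity the abstract refers to.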
