Abstract

Grasping objects is one of the basic tasks of robots in many scenarios. The main challenge is generating grasp poses for unknown objects in cluttered scenes. This paper proposes a model-free 6-DOF grasp detection framework based on single-view local point clouds. The pipeline consists of three stages: a Candidate Generation Network (CGN), a Reliable Adjustment Module (RAM), and a Quality Assessment Network (QAN). The CGN predicts the graspability and initial grasp pose of sampled points from features extracted from a local spherical region by an improved PointNet. To better learn from local point-cloud regions, we propose a progressive local-region data learning mechanism that efficiently extracts features from small to large scales. Candidate grasps are then formed from the graspable points and their grasp poses. The RAM adjusts the position and width of the generated candidates using reliable heuristic rules. The QAN evaluates candidate quality with a simplified PointNet and selects high-confidence grasps for execution. The proposed method not only achieves state-of-the-art results on GraspNet-1Billion but also attains high grasp success rates in real cluttered scenes.
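To make the three-stage data flow concrete, the following is a minimal sketch of the pipeline shape described above. All function bodies are hypothetical stand-ins (the real CGN and QAN are learned PointNet-style networks, and the RAM's actual heuristics are not given in the abstract); only the stage interfaces and the candidate filtering flow are illustrated.

```python
import numpy as np

def cgn(points):
    """Stand-in Candidate Generation Network: score the graspability of each
    sampled point and propose an initial 6-DOF pose (position plus a dummy
    3-vector orientation) and a gripper width. Random scores here are a
    placeholder for the learned network's predictions."""
    rng = np.random.default_rng(0)
    scores = rng.random(len(points))                           # graspability in [0, 1]
    poses = np.hstack([points, rng.random((len(points), 3))])  # [x, y, z, rx, ry, rz]
    widths = np.full(len(points), 0.08)                        # initial width (metres)
    return scores, poses, widths

def ram(poses, widths, cloud):
    """Stand-in Reliable Adjustment Module: heuristically refine position and
    width, e.g. snap each grasp centre to the nearest cloud point and shrink
    the width slightly toward the object surface (an assumed rule)."""
    centres = poses[:, :3]
    dists = np.linalg.norm(centres[:, None, :] - cloud[None, :, :], axis=2)
    adjusted = poses.copy()
    adjusted[:, :3] = cloud[dists.argmin(axis=1)]              # brute-force NN snap
    return adjusted, widths * 0.9

def qan(poses, widths, threshold=0.5):
    """Stand-in Quality Assessment Network: score candidates and keep only
    those above a confidence threshold."""
    quality = np.random.default_rng(1).random(len(poses))
    keep = quality >= threshold
    return poses[keep], widths[keep], quality[keep]

def detect_grasps(cloud, n_samples=32):
    """Full pipeline: sample points, generate candidates (CGN), refine them
    (RAM), then assess and filter (QAN)."""
    idx = np.random.default_rng(2).choice(len(cloud), n_samples, replace=False)
    scores, poses, widths = cgn(cloud[idx])
    graspable = scores > 0.5                                   # keep graspable points
    poses, widths = ram(poses[graspable], widths[graspable], cloud)
    return qan(poses, widths)
```

In use, `detect_grasps` takes an N×3 single-view point cloud and returns the filtered high-confidence poses, their gripper widths, and their quality scores; the executing robot would then pick among the surviving candidates.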
