Abstract
Robotic grasping is one of the key functions for realizing industrial automation and human-machine interaction. However, current robotic grasping methods for unknown objects mainly focus on generating 6D grasp poses, which do not provide rich object pose information and are not robust in challenging scenes. To address this, we propose a robotic continuous grasping system that achieves end-to-end robotic grasping of intra-class unknown objects in 3D space via accurate category-level 6D object pose estimation. Specifically, to achieve object pose estimation, we first propose a global shape extraction network (GSENet) based on ResNet1D to extract the global shape of an object category from the 3D models of intra-class known objects. Then, with the global shape as the prior feature, we propose a transformer-guided network to reconstruct the shape of intra-class unknown objects. The proposed network effectively introduces internal and mutual communication between the prior feature, the current feature, and their difference feature. Internal communication is performed by self-attention, and mutual communication is performed by cross-attention to strengthen their correlation. To achieve robotic grasping of multiple objects, we propose a low-computation and effective grasping strategy based on a pre-defined vector orientation, and develop a GUI for monitoring and control. Experiments on two benchmark datasets demonstrate that our system achieves state-of-the-art (SOTA) 6D pose estimation accuracy. Moreover, real-world experiments show that our system also achieves superior robotic grasping performance, with a grasping success rate of 81.6% for multiple objects. Code and trained models are released at https://github.com/CNJianLiu/6D-CLGrasp.
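To make the described internal/mutual communication concrete, below is a minimal PyTorch sketch of the attention scheme the abstract outlines: self-attention within each of the prior, current, and difference feature streams, followed by cross-attention between them. All module names, dimensions, and the fusion layout are illustrative assumptions, not the authors' released implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn


class FeatureCommunication(nn.Module):
    """Hypothetical sketch of the internal/mutual communication between the
    prior, current, and difference features. Dimensions and structure are
    assumptions for illustration only."""

    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, prior: torch.Tensor, current: torch.Tensor) -> torch.Tensor:
        # Difference feature: deviation of the observed object's features
        # from the category-level shape prior.
        diff = current - prior

        # Internal communication: self-attention within each stream.
        streams = []
        for f in (prior, current, diff):
            out, _ = self.self_attn(f, f, f)
            streams.append(out)
        prior_f, current_f, diff_f = streams

        # Mutual communication: cross-attention to strengthen correlation.
        # Here the current stream queries the prior and difference streams;
        # the actual pairing in the paper may differ.
        context = torch.cat([prior_f, diff_f], dim=1)
        fused, _ = self.cross_attn(current_f, context, context)
        return fused


# Usage with assumed shapes (batch, num_points, feature_dim):
comm = FeatureCommunication(dim=256, heads=4)
prior = torch.randn(2, 1024, 256)
current = torch.randn(2, 1024, 256)
fused = comm(prior, current)  # -> (2, 1024, 256)
```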