Abstract

A pivotal technology for autonomous robot grasping is efficient and accurate grasp pose detection, which enables robotic arms to grasp objects in cluttered environments without human intervention. However, most existing methods rely on PointNet or convolutional neural networks as backbones for grasp pose prediction, which can waste computation on invalid grasp points or background regions. Consequently, efficient grasp pose detection for graspable points in complex scenes remains a challenge. In this paper, we propose FastGNet, an end-to-end model that combines multiple attention mechanisms with the transformer architecture to generate 6-DOF grasp poses efficiently. Our approach introduces a novel sparse point cloud voxelization technique that preserves the complete mapping between points and voxels while generating positional embeddings for the transformer network. By integrating unsupervised and supervised attention mechanisms into the grasp model, our method significantly improves the ability to focus on graspable target points in complex scenes. The effectiveness of FastGNet is validated on the large-scale GraspNet-1Billion dataset. Our approach outperforms previous methods and achieves relatively fast inference times, highlighting its potential to advance autonomous robot grasping capabilities.
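To make the voxelization step concrete, the sketch below shows one plausible way to voxelize a sparse point cloud while retaining the complete point-to-voxel mapping and deriving coordinates for transformer positional embeddings, as the abstract describes. The voxel size, embedding scheme, and all function and variable names are illustrative assumptions, not FastGNet's actual implementation.

```python
# Minimal sketch: sparse voxelization that keeps the full point->voxel
# mapping, plus a sinusoidal positional embedding of the voxel centers.
# All names and parameter choices here are hypothetical.
import numpy as np

def sparse_voxelize(points, voxel_size=0.02):
    """Assign each point to a voxel and keep the inverse mapping.

    points: (N, 3) array of xyz coordinates.
    Returns the unique occupied voxel coordinates, a per-point index
    into that voxel list (the complete point-to-voxel mapping), and
    the voxel centers used as positional-embedding inputs.
    """
    # Quantize continuous coordinates into integer voxel indices.
    voxel_coords = np.floor(points / voxel_size).astype(np.int64)
    # `return_inverse` preserves the point->voxel mapping, so per-point
    # features can later be scattered to voxels and gathered back.
    unique_voxels, point_to_voxel = np.unique(
        voxel_coords, axis=0, return_inverse=True)
    # Voxel centers provide continuous coordinates for the embeddings.
    voxel_centers = (unique_voxels + 0.5) * voxel_size
    return unique_voxels, point_to_voxel, voxel_centers

def positional_embedding(voxel_centers, dim=64):
    """Sinusoidal embedding of voxel centers (one plausible choice)."""
    freqs = 2.0 ** np.arange(dim // 6)          # frequency bands per axis
    angles = voxel_centers[:, :, None] * freqs  # (V, 3, dim // 6)
    emb = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return emb.reshape(len(voxel_centers), -1)  # (V, ~dim)

# Example: 10k random points in a 1 m cube.
pts = np.random.rand(10000, 3)
voxels, p2v, centers = sparse_voxelize(pts)
pe = positional_embedding(centers)
print(voxels.shape, p2v.shape, pe.shape)
```

Keeping the inverse index (`point_to_voxel` above) is the key design choice: it lets voxel-level transformer features be mapped back to individual points, so per-point grasp predictions are still possible after sparse processing.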
