Abstract

Point cloud segmentation is key to robotic grasping research in multi-object stacked scenes. To address the low accuracy and poor robustness of existing point cloud segmentation methods in complex stacked scenes, this paper proposes a segmentation method based on a dynamic graph convolutional neural network (DGCNN). A channel-spatial attention module, which combines a channel attention mechanism and a spatial attention mechanism, is added to the DGCNN feature-extraction stage so that the network can exploit point cloud features more effectively. To train the improved DGCNN, we build a small rectangular-object segmentation dataset containing 800 real-world stacked scenes. Experimental results show that the segmentation accuracy reaches 88.61%, and extensive grasping experiments show that robotic grasping guided by our point cloud segmentation achieves a 96% success rate, which meets actual production needs.
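The abstract describes adding a channel-spatial attention module to DGCNN feature extraction. The paper's exact module design is not given here, but the general idea can be sketched as a CBAM-style pair of gates over per-point features: a channel gate computed from point-wise pooled statistics, followed by a spatial (per-point) gate computed from channel-wise pooled statistics. The weight matrices below are random placeholders, not trained parameters; this is a minimal NumPy illustration, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feats, w1, w2):
    """Reweight the C channels of per-point features (N, C).

    Pools over all points (avg + max), passes both pooled vectors
    through a shared two-layer MLP, and gates each channel.
    """
    avg = feats.mean(axis=0)                      # (C,)
    mx = feats.max(axis=0)                        # (C,)
    gate = sigmoid(w2 @ np.maximum(w1 @ avg, 0)
                   + w2 @ np.maximum(w1 @ mx, 0))  # (C,)
    return feats * gate

def spatial_attention(feats, w):
    """Reweight the N points using channel-wise pooled statistics."""
    stats = np.stack([feats.mean(axis=1), feats.max(axis=1)], axis=1)  # (N, 2)
    gate = sigmoid(stats @ w)                     # (N, 1)
    return feats * gate

# Toy example: 128 points with 64-channel features, reduction ratio 4.
rng = np.random.default_rng(0)
N, C, r = 128, 64, 4
x = rng.standard_normal((N, C))
w1 = rng.standard_normal((C // r, C)) * 0.1   # placeholder MLP weights
w2 = rng.standard_normal((C, C // r)) * 0.1
w = rng.standard_normal((2, 1)) * 0.1         # placeholder spatial weights

y = spatial_attention(channel_attention(x, w1, w2), w)
print(y.shape)  # (128, 64): same shape, channels and points reweighted
```

In a trained network these gates would sit between DGCNN edge-convolution layers, and their weights would be learned jointly with the rest of the model.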
