Abstract

In recent years, with the rapid development of 3D acquisition technology, point clouds have come to play an increasingly important role in fields such as computer vision, autonomous driving, and robotics. In the semantic segmentation of 3D point clouds, most current segmentation networks ignore the relationships between points when learning local features, which leads to inadequate extraction of local geometric features. To address this problem, this paper proposes DPGNN, a 3D point cloud segmentation network based on dynamic graph convolution. Building on the PointNet++ pipeline, the model strengthens local feature learning: a dynamic graph convolution module is designed to replace the local feature extraction in PointNet++. The module dynamically constructs a graph over each point's local neighborhood and uses a multilayer perceptron to extract features from the edges of that graph. Scene segmentation and part segmentation experiments are conducted on the S3DIS and ShapeNet datasets, respectively. The overall accuracy in indoor scene segmentation reaches 88.27%, 6.01 percentage points higher than the baseline network PointNet++; the mean class Intersection over Union (mIoU) in part segmentation reaches 85.3%, 0.2 percentage points higher than the baseline. The results show that the proposed dynamic graph convolution module effectively improves point cloud segmentation accuracy, and that DPGNN outperforms most current mainstream point cloud segmentation networks.
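The dynamic graph convolution the abstract describes (rebuilding a k-nearest-neighbor graph from the current point features and applying a shared MLP to the resulting edge features, in the spirit of EdgeConv) can be sketched roughly as below. This is a minimal illustrative sketch, not the paper's implementation; the function names and the single-layer MLP weights are assumptions made for the example:

```python
import numpy as np

def knn_indices(x, k):
    """Indices of the k nearest neighbors per point (pairwise squared distances)."""
    # x: (N, F) point features; the graph is rebuilt from the current
    # features each time it is called, which is what makes it "dynamic".
    d2 = np.sum(x**2, axis=1, keepdims=True) - 2.0 * x @ x.T + np.sum(x**2, axis=1)
    return np.argsort(d2, axis=1)[:, 1:k + 1]  # skip self (distance 0)

def edge_conv(x, k, w1, b1):
    """One EdgeConv-style layer: shared MLP over edge features, max-pooled per point.

    Edge feature for the pair (i, j) is concat(x_i, x_j - x_i); w1/b1 are
    illustrative shared-MLP weights (one linear layer + ReLU for brevity).
    """
    idx = knn_indices(x, k)                              # (N, k)
    neighbors = x[idx]                                   # (N, k, F)
    center = np.repeat(x[:, None, :], k, axis=1)         # (N, k, F)
    edges = np.concatenate([center, neighbors - center], axis=-1)  # (N, k, 2F)
    h = np.maximum(edges @ w1 + b1, 0.0)                 # shared MLP with ReLU
    return h.max(axis=1)                                 # symmetric max aggregation

# Toy usage: 8 random 3-D points, k = 3 neighbors, 16 output channels.
rng = np.random.default_rng(0)
pts = rng.standard_normal((8, 3))
w = rng.standard_normal((6, 16)) * 0.1
b = np.zeros(16)
out = edge_conv(pts, k=3, w1=w, b1=b)
print(out.shape)  # (8, 16): one aggregated edge feature vector per point
```

Because the neighbor graph is recomputed from the features at hand, stacking such layers lets later graphs group points that are close in feature space rather than only in the input coordinate space.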
