Abstract

The semantic segmentation of point clouds has significant applications in fields such as autonomous driving, robot vision, and smart cities. As LiDAR technology continues to develop, point clouds have gradually become the main type of 3D data. However, because point cloud data are disordered and scattered, effective semantic segmentation is challenging. Three-dimensional (3D) shape provides an important means of studying the spatial relationships between objects and their structures in point clouds. This paper therefore proposes a semi-supervised semantic segmentation network for point clouds based on 3D shape, which we call SBSNet. The network groups and encodes the geometric information of 3D objects to form shape features, and it uses an attention mechanism together with local information fusion to capture shape context and compute point features. The experimental results showed that the proposed method achieved an overall intersection over union of 85.3% on the ShapeNet dataset and 90.6% accuracy on the ModelNet40 dataset. Empirically, it performed on par with or better than state-of-the-art models.
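
The abstract describes grouping local geometry into shape features and fusing them with attention. The following minimal PyTorch sketch illustrates one way such a block could look; it is a hypothetical illustration, not the paper's SBSNet implementation, and the class name, the k-NN grouping choice, and all dimensions are assumptions.

```python
# Hypothetical sketch (not the authors' code): group each point's k nearest
# neighbors, encode relative geometry as a local "shape feature", and fuse
# the neighborhood with learned attention weights.
import torch
import torch.nn as nn


class ShapeAttentionBlock(nn.Module):
    def __init__(self, in_dim: int = 3, feat_dim: int = 64, k: int = 16):
        super().__init__()
        self.k = k
        # Encodes relative neighbor coordinates into per-neighbor shape features.
        self.encode = nn.Sequential(
            nn.Linear(in_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim),
        )
        # Scores each neighbor; a softmax over the neighborhood gives attention weights.
        self.score = nn.Linear(feat_dim, 1)

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        # xyz: (B, N, 3) point coordinates.
        B, N, _ = xyz.shape
        dist = torch.cdist(xyz, xyz)                     # (B, N, N) pairwise distances
        idx = dist.topk(self.k, largest=False).indices   # (B, N, k) k-NN indices
        neighbors = torch.gather(                        # (B, N, k, 3) neighbor coordinates
            xyz.unsqueeze(1).expand(B, N, N, 3), 2,
            idx.unsqueeze(-1).expand(B, N, self.k, 3),
        )
        rel = neighbors - xyz.unsqueeze(2)               # relative geometry per neighbor
        feats = self.encode(rel)                         # (B, N, k, feat_dim)
        attn = torch.softmax(self.score(feats), dim=2)   # (B, N, k, 1) attention weights
        return (attn * feats).sum(dim=2)                 # (B, N, feat_dim) fused local feature


if __name__ == "__main__":
    pts = torch.rand(2, 1024, 3)        # toy batch of point clouds
    out = ShapeAttentionBlock()(pts)
    print(out.shape)                    # torch.Size([2, 1024, 64])
```

In this sketch, the attention weights let each point emphasize the neighbors most informative about local shape before pooling, which is one common way to realize the "shape context" fusion the abstract refers to.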
