Abstract

Point cloud semantic segmentation is a challenging task in 3D understanding because point clouds are unordered, unstructured, and nonuniformly dense. Most current methods focus on network design and feature extraction, yet it remains difficult to capture the features of complex objects comprehensively and accurately. In this paper, we propose a multiscale hierarchical network (MHNet) for 3D point cloud semantic segmentation. First, a hierarchical point cloud feature extraction structure is constructed to learn multiscale local region features. These local features are then passed through feature propagation to obtain features of the entire point set for pointwise label prediction. To take full advantage of the correlations between the different-scale coarse layers and the original points, the local features of each scale are propagated back to the original point clouds at the corresponding scale. The global features propagated from the different scales are integrated to form the final features of the input point clouds. The concatenated multiscale hierarchical features, comprising both local and global features, better predict the segmentation probability of each point. Finally, the predicted segmentation results are refined using a conditional random field (CRF) with a spatial consistency constraint. The efficiency of MHNet is evaluated on two 3D datasets (S3DIS and ScanNet), and the results show performance comparable or superior to the state of the art on both datasets.
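The abstract describes concatenating per-point features propagated back from several coarser scales before pointwise classification. The idea can be sketched as follows; this is a minimal NumPy illustration with made-up names and dimensions, not the authors' implementation:

```python
import numpy as np

# Hypothetical per-point features propagated back to the original
# points from two coarser scales (dimensions are illustrative only).
n_points = 1024
feat_scale1 = np.random.rand(n_points, 64)   # propagated from scale 1
feat_scale2 = np.random.rand(n_points, 128)  # propagated from scale 2

# Concatenate the multiscale features per point; a pointwise
# classifier would then predict a label from each fused row.
fused = np.concatenate([feat_scale1, feat_scale2], axis=1)
print(fused.shape)  # (1024, 192)
```

Each row of `fused` combines information from all scales, which is what lets the pointwise prediction draw on both local and global context.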

Highlights

  • Point cloud semantic segmentation plays a critical role in autonomous driving, robot navigation, augmented reality and 3D reconstruction

  • We propose a Multiscale Hierarchical Network (MHNet) for 3D point cloud semantic segmentation

  • Evaluation metrics are defined per class i, where TPi is the number of true-positive points, Ti is the number of ground-truth points of class i, Pi is the number of points predicted as class i, and N is the number of classes
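The quantities quoted in the last highlight are the ingredients of the standard segmentation metrics (overall accuracy and mean IoU). A generic sketch of how they are computed from a confusion matrix, not the authors' evaluation code:

```python
import numpy as np

def segmentation_metrics(C):
    """Overall accuracy and mean IoU from a confusion matrix C,
    where C[i, j] counts points of ground-truth class i that were
    predicted as class j."""
    C = np.asarray(C, dtype=float)
    tp = np.diag(C)           # TP_i: true positives per class
    t = C.sum(axis=1)         # T_i: ground-truth points per class
    p = C.sum(axis=0)         # P_i: predicted points per class
    iou = tp / (t + p - tp)   # IoU_i = TP_i / (T_i + P_i - TP_i)
    return C.trace() / C.sum(), iou.mean()

# Toy two-class example.
acc, miou = segmentation_metrics([[8, 2], [1, 9]])
print(acc, miou)  # 0.85 and (8/11 + 9/12) / 2
```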



Introduction

Point cloud semantic segmentation plays a critical role in autonomous driving, robot navigation, augmented reality, and 3D reconstruction. The unordered and unstructured nature of 3D point clouds makes them difficult to represent as 2D images, so existing image segmentation frameworks cannot be applied directly. Their large scale and nonuniform density pose further challenges for 3D point cloud understanding. Previous solutions mainly transform 3D point clouds into 2D images [1]–[4] or regular voxel grids [5]–[7]. Converting point clouds to 2D formats results in the loss of information, while voxelization assigns all points in the same voxel the same semantic label, which tends to
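The voxelization drawback mentioned above can be seen in a few lines; the coordinates and voxel size here are made up for illustration:

```python
import numpy as np

# Points falling into the same voxel are forced to share one label.
points = np.array([[0.10, 0.10, 0.10],
                   [0.12, 0.11, 0.09],   # same voxel as the first point
                   [0.90, 0.85, 0.80]])
voxel_size = 0.5
voxel_ids = np.floor(points / voxel_size).astype(int)

# The first two points map to the same voxel index (0, 0, 0), so a
# voxel-level prediction cannot distinguish their semantic labels.
print(voxel_ids)
```

If the two nearby points actually belong to different classes (e.g., a chair leg touching the floor), a voxel-level prediction necessarily mislabels one of them.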

