Abstract

A point cloud is a set of points in 3D space, typically produced by a 3D scanner to capture a 3D representation of a scene. Semantic segmentation of 3D point cloud data, in which each point is assigned a semantic class such as building, road, or water, has recently gained tremendous attention from data mining researchers and industrial practitioners. Accurate 3D segmentation results can be used for constructing 3D scenes for robotic navigation and for assessing urban expansion. Point cloud data poses the major challenge of an irregular format: points are distributed irregularly in space, unlike the 2D pixels of an image or the 3D voxels of a voxelized model. A number of deep learning architectures have been proposed to model 3D point clouds for semantic segmentation. In this paper, we present a new case study applying three novel deep learning architectures, PointNet, PointCNN, and SPGraph, to an outdoor aerial survey point cloud dataset whose per-point features include intensity and spectral information (RGB). We then compare the 3D semantic segmentation results of these networks in terms of overall accuracy. The results show that PointNet, PointCNN, and SPGraph achieve 83%, 72.7%, and 83.4% overall accuracy, respectively.
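To make the data representation and the reported metric concrete, the following is a minimal sketch (not the paper's actual pipeline) of how a point cloud with intensity and RGB features, per-point labels, and the overall-accuracy metric might be represented; all values and class names here are illustrative assumptions:

```python
import numpy as np

# A hypothetical point cloud: each row is one point with
# x, y, z coordinates, scanner intensity, and RGB color.
points = np.array([
    [1.0, 2.0, 0.1, 0.8, 120, 115, 110],  # e.g. a road point
    [1.2, 2.1, 5.0, 0.5, 200,  50,  40],  # e.g. a building point
    [3.0, 0.5, 0.0, 0.3,  30,  90, 160],  # e.g. a water point
])

# Semantic segmentation assigns one class label per point.
CLASSES = {0: "road", 1: "building", 2: "water"}
ground_truth = np.array([0, 1, 2])
predicted = np.array([0, 1, 1])  # illustrative: one point misclassified

# Overall accuracy = fraction of points labeled correctly,
# the metric used to compare the three networks in this paper.
overall_accuracy = np.mean(predicted == ground_truth)
print(f"Overall accuracy: {overall_accuracy:.1%}")
```

Note that, unlike image pixels, the rows above have no grid structure; the networks compared in the paper (PointNet, PointCNN, SPGraph) differ mainly in how they handle this irregularity.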
