Abstract

The automatic classification of 3-D point clouds is widely recognized as a challenging task in complex road environments. Specifically, each point is automatically assigned a unique category label, and these labels then serve as clues for semantic analysis and scene recognition. Instead of heuristically extracting handcrafted features to classify all points, as in traditional methods, we propose an end-to-end octree-based fully convolutional network (FCN) to classify 3-D point clouds in an urban road environment. This paper makes four contributions. First, combining OctNet with an FCN greatly reduces computing time and memory demands compared with a dense 3-D convolutional neural network (CNN). Second, the octree-based network is strengthened by modifying the cross-entropy loss function to address the unbalanced category distribution. Third, an Inception-ResNet block is integrated into our network, enabling the 3-D CNN to effectively classify scenes containing objects at multiple scales and improving classification accuracy. Last, an open-source data set (the HuangshiRoad data set) with ten classes is introduced for 3-D point cloud classification. Three representative data sets [Semantic3D, WHU_MLS (blocks I and II), and HuangshiRoad] with different covered areas and numbers of points and classes are selected to evaluate the proposed method. The experimental results show appreciable overall classification accuracy: 89.4% for Semantic3D, 82.9% for WHU_MLS block I, 91.4% for WHU_MLS block II, and 94% for HuangshiRoad. Our deep learning approach can efficiently classify dense 3-D point clouds of urban road environments captured by a mobile laser scanning (MLS) system or static LiDAR.
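The abstract does not spell out how the cross-entropy loss is modified for the unbalanced category distribution. A common remedy is to weight each class by its inverse frequency, so that rare classes (e.g. poles or signs versus ground points) contribute more to the loss. The sketch below is an illustrative assumption in plain NumPy, not the paper's implementation; the function name and weighting scheme are hypothetical.

```python
import numpy as np

def weighted_cross_entropy(probs, labels, class_weights):
    """Class-weighted cross-entropy over N points and C classes.

    probs:         (N, C) softmax outputs of the network
    labels:        (N,)   integer ground-truth class labels
    class_weights: (C,)   per-class weights, e.g. inverse class frequency
    """
    n = probs.shape[0]
    p_true = probs[np.arange(n), labels]   # predicted probability of the true class
    w = class_weights[labels]              # weight attached to each point's class
    return float(np.sum(-w * np.log(p_true)) / np.sum(w))

# Toy example: class 0 dominates, so inverse-frequency weighting
# boosts the contribution of the rare class 1.
labels = np.array([0, 0, 0, 1])
counts = np.bincount(labels, minlength=2)
weights = counts.sum() / (len(counts) * counts)   # rarer class -> larger weight
probs = np.array([[0.9, 0.1],
                  [0.8, 0.2],
                  [0.7, 0.3],
                  [0.4, 0.6]])
loss = weighted_cross_entropy(probs, labels, weights)
```

Because the rare class is both up-weighted and poorly predicted in this toy example, the weighted loss exceeds the unweighted mean cross-entropy, which is exactly the pressure such a modification puts on the network during training.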
