Abstract

Outdoor scene understanding based on point cloud classification plays an important role in mobile robots and autonomous vehicles equipped with light detection and ranging (LiDAR) systems. In this paper, a novel model named the Panoramic Bearing Angle (PBA) image, generated from 3D point clouds, is proposed. In the PBA model, laser point clouds are projected onto a spherical surface to establish a correspondence between laser ranging points and image pixels, and the gray value of each pixel is then computed from the relative spatial positions of the laser points in 3D space. To extract robust features from 3D laser point clouds, both an image pyramid and a point cloud pyramid are used to extract multi-scale features from the PBA images and the original point clouds, respectively. A Random Forest classifier performs feature screening on the extracted high-dimensional features to obtain the initial classification results. Moreover, to make full use of the contextual information between laser points, the classification results are remapped onto the PBA images and superpixel segmentation is applied; within each superpixel block, reclassification is carried out based on the initial classification results, correcting some misclassified points and improving classification accuracy. Two datasets published by ETH Zurich and MINES ParisTech are used to evaluate classification performance, and the results are reported in terms of the precision and recall of the proposed algorithm.
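The projection step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the image resolution, the law-of-cosines form of the bearing angle, and the 8-bit gray mapping are assumptions chosen for clarity.

```python
import numpy as np

def pba_project(points, width=720, height=180):
    """Project 3D points (N, 3) onto a spherical (panoramic) pixel grid.

    Returns the row/col index and range of each point. Grid size is an
    illustrative assumption, not the paper's resolution.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)
    azimuth = np.arctan2(y, x)                        # [-pi, pi]
    elevation = np.arcsin(z / np.maximum(r, 1e-9))    # [-pi/2, pi/2]
    col = ((azimuth + np.pi) / (2 * np.pi) * (width - 1)).astype(int)
    row = ((np.pi / 2 - elevation) / np.pi * (height - 1)).astype(int)
    return row, col, r

def bearing_angle_gray(r_prev, r_cur, d_phi):
    """Bearing angle between two consecutive beams separated by d_phi
    radians (classic law-of-cosines form), mapped to an 8-bit gray value."""
    num = r_cur - r_prev * np.cos(d_phi)
    den = np.sqrt(r_prev**2 + r_cur**2 - 2 * r_prev * r_cur * np.cos(d_phi))
    ba = np.arccos(np.clip(num / np.maximum(den, 1e-9), -1.0, 1.0))
    return (ba / np.pi * 255).astype(np.uint8)
```

On a flat surface scanned at constant angular resolution, neighboring ranges are similar and the bearing angle stays near 90 degrees, so such regions appear with nearly uniform gray values in the resulting image.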

Highlights

  • Outdoor scene understanding based on mobile laser scanning (MLS) point cloud data is a fundamental capability for unmanned vehicles and autonomous mobile robots navigating in urban environments

  • A 3D laser point cloud dataset published by MINES ParisTech is selected to verify the algorithm

  • This paper presents an approach of 3D laser point cloud classification to accomplish outdoor scene understanding in urban environments


Summary

Introduction

Outdoor scene understanding based on mobile laser scanning (MLS) point cloud data is a fundamental capability for unmanned vehicles and autonomous mobile robots navigating in urban environments (Sensors 2019, 19, 4546). Like the reflectance image, the Bearing Angle (BA) image is used to solve the laser point cloud classification problem in outdoor and indoor scenes. Zhang et al. studied 3D object detection in cluttered indoor environments and transformed the 3D laser point cloud into a 2D BA image, which enabled the robot to complete the task of scene understanding at a lower computational cost [12]. A novel image model named the PBA image is first proposed to represent the MLS point cloud data, which shows superior performance in displaying a large-scale scene with a panoramic view. To improve the accuracy and robustness of scene-understanding results, multi-scale features are extracted from the PBA images and from the corresponding original LiDAR point clouds. A series of experimental results on both the ETH Zurich and MINES ParisTech datasets is given to test the validity and robustness of the proposed approach.
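The multi-scale feature extraction mentioned above can be sketched with a simple image pyramid over a PBA image. This is an illustrative stand-in for the paper's pyramid: the 2x2 mean-pooling downsampling, the pyramid depth, and the (mean, std) window statistics are assumptions, not the authors' exact feature set.

```python
import numpy as np

def image_pyramid(img, levels=3):
    """Build a pyramid by repeated 2x2 mean-pooling of a 2D gray image.
    Pooling kernel and depth are illustrative assumptions."""
    pyramid = [img]
    for _ in range(levels - 1):
        h, w = pyramid[-1].shape
        h2, w2 = h - h % 2, w - w % 2          # crop to even size
        down = (pyramid[-1][:h2, :w2]
                .reshape(h2 // 2, 2, w2 // 2, 2)
                .mean(axis=(1, 3)))
        pyramid.append(down)
    return pyramid

def patch_stats(pyramid, row, col, size=3):
    """Per-level statistical features (mean, std) in a small window
    around the pixel, with coordinates rescaled at each level."""
    feats = []
    for level, img in enumerate(pyramid):
        r, c = row >> level, col >> level      # coordinates at this scale
        h, w = img.shape
        r0, r1 = max(r - size // 2, 0), min(r + size // 2 + 1, h)
        c0, c1 = max(c - size // 2, 0), min(c + size // 2 + 1, w)
        win = img[r0:r1, c0:c1]
        feats.extend([win.mean(), win.std()])
    return np.array(feats)
```

Stacking such per-pixel vectors across all levels yields the kind of high-dimensional feature input that a Random Forest classifier can then screen and classify.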

Panoramic Bearing Angle Images Generated from 3D Laser Point Clouds
Calculating the Image Gray Value
Multi-Scale PBA Image Feature Extraction
Statistical Features
Morphological Features
Histogram Features
Classification
Initial Classification
Ground Classification Evaluation Metrics of the Testing Set
Classification Results of 3D Point Clouds Obtained in On-the-Fly Scanning Mode
Point Cloud Classification Results
Conclusions