Abstract

Three-dimensional point clouds have been utilized and studied for the classification of objects at the environmental level. While most existing studies, such as those in the field of computer vision, have detected object types from the perspective of sensors, this study developed a specialized strategy for object classification using LiDAR data points on the surface of the object. We propose a method for generating a spherically stratified point projection (sP2) feature image that can be applied to existing image-classification networks by performing pointwise classification of a 3D point cloud using only LiDAR sensor data. The sP2 engine performs image generation through spherical stratification, evidence collection, and channel integration. Spherical stratification categorizes neighboring points into three layers according to distance ranges. Evidence collection calculates an occupancy probability based on Bayes' rule to project the 3D points onto a two-dimensional surface for each stratified layer. Channel integration generates sP2 RGB images whose three channels encode the evidence values for the short, medium, and long distance ranges. Finally, the sP2 images are used as a trainable source for classifying the points into predefined semantic labels. Experimental results indicate the effectiveness of the proposed sP2 feature images when classified using the LeNet architecture.
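As a rough illustration of the pipeline described above, the following Python sketch builds one sP2-style feature image for a single target point: neighbors are stratified into three distance shells, each shell is projected onto a spherical (azimuth-elevation) grid whose cells accumulate Bayes-rule occupancy evidence, and the three resulting probability maps are stacked as RGB channels. The grid size, shell radii, and sensor-model probabilities are illustrative assumptions, not the values used in the paper.

# Hedged sketch (not the authors' code): one sP2-style feature image per target point.
import numpy as np

def sp_feature_image(target, cloud, radii=(0.5, 1.0, 2.0), grid=(32, 32),
                     p_hit=0.7, p_prior=0.5):
    """Build a 3-channel image: one spherical-projection layer per distance shell."""
    offsets = cloud - target                       # neighbors relative to the target point
    dist = np.linalg.norm(offsets, axis=1)
    prior_logodds = np.log(p_prior / (1.0 - p_prior))
    hit_logodds = np.log(p_hit / (1.0 - p_hit))
    channels, lower = [], 0.0
    for upper in radii:                            # spherical stratification into three shells
        mask = (dist > lower) & (dist <= upper) & (dist > 1e-6)
        pts = offsets[mask]
        logodds = np.full(grid, prior_logodds)     # occupancy grid in log-odds form
        if len(pts):
            # project each neighbor onto the unit sphere (azimuth, elevation)
            az = np.arctan2(pts[:, 1], pts[:, 0])              # [-pi, pi]
            el = np.arcsin(pts[:, 2] / dist[mask])             # [-pi/2, pi/2]
            u = ((az + np.pi) / (2 * np.pi) * (grid[1] - 1)).astype(int)
            v = ((el + np.pi / 2) / np.pi * (grid[0] - 1)).astype(int)
            for r, c in zip(v, u):                 # Bayes-style evidence accumulation per cell
                logodds[r, c] += hit_logodds
        channels.append(1.0 / (1.0 + np.exp(-logodds)))  # back to occupancy probability
        lower = upper
    return np.stack(channels, axis=-1)             # H x W x 3 "RGB" evidence image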

Highlights

  • We propose a feature-image descriptor that encodes geometric information based only on the 3D points collected by a LiDAR sensor; the generated feature images capture the distribution of points near a target point, including their locations, distances, and density.

  • The proposed feature-image-generation method is applicable to any 3D point cloud and enables pointwise classification through popular image classifiers such as convolutional neural network (CNN) models (see the classifier sketch after this list).

  • The proposed sP2 method was validated by training image-classification networks on the generated feature images and evaluating them on the Kongju National University (KNU) and KITTI datasets.

  • The KNU dataset is a collection of 3D point clouds scanned by a LiDAR scanner, together with dead-reckoning positions measured by an inertial measurement unit (IMU) on a mobile robot platform.

  • Considering the two datasets, we conclude that the density of the point clouds obtained from the LiDAR sensor influences the accuracy of the sP2 method; higher accuracy is achieved on the KITTI dataset, which has the higher point density.
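As noted in the highlights, the sP2 feature images can be fed to a standard image classifier. Below is a minimal sketch of a LeNet-style CNN (in PyTorch) that maps one 32x32 sP2 image to a per-point semantic label; the layer sizes, image resolution, and number of classes are assumptions for illustration, not the exact configuration used in the paper.

# Hedged sketch: LeNet-style classifier consuming sP2 feature images (one per 3D point).
import torch
import torch.nn as nn

class LeNetSP(nn.Module):
    def __init__(self, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 6, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),   # 32 -> 28 -> 14
            nn.Conv2d(6, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),  # 14 -> 10 -> 5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120), nn.ReLU(),
            nn.Linear(120, 84), nn.ReLU(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x):            # x: (N, 3, 32, 32) batch of sP2 images
        return self.classifier(self.features(x))

# Usage: one sP2 image per 3D point, one predicted label per point.
model = LeNetSP(num_classes=4)
logits = model(torch.randn(8, 3, 32, 32))   # -> (8, 4)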


Summary

Introduction

Range measurements from LiDAR sensors must be complemented by an effective method for generating input data in order to achieve high-performance object classification based on deep learning. Raw LiDAR data cannot be used directly in deep-learning algorithms designed for visual images. These problems can be addressed by designing network architectures optimized for the sparse three-dimensional (3D) points provided by LiDAR sensors. The remainder of this paper is organized as follows: Section 2 describes the proposed sP2 method for capturing feature images that can be used as learning input data from a 3D point cloud.

Spherically Stratified Point Projection
Image Descriptor
Occupancy Grid Update
Image Generation
Experimental Evaluation
Datasets and Training Setup
Classification Performance
Method
Raw 3D Point Cloud Classification
Findings
Conclusions and Further Works
