Abstract

Light Detection and Ranging (LiDAR), which uses light in the form of a pulsed laser to estimate the distance between the LiDAR sensor and objects, is an effective remote sensing technology. LiDAR is used in many applications, including autonomous vehicles, robotics, and virtual and augmented reality (VR/AR). With the evolution of LiDAR technology, 3D point cloud classification has become an active research topic. This research aims to provide a high-performance 3D point cloud classification method that is compatible with real-world data. More specifically, we introduce a novel framework for 3D point cloud classification, namely GSV-NET, which uses a Gaussian Supervector and an enhanced region representation. GSV-NET extracts and combines both global and regional features of the 3D point cloud to enrich the feature information used for classification. First, we feed the Gaussian Supervector description into a 3D wide-inception convolutional neural network (CNN) to extract the global feature. Second, we convert the regions of the 3D point cloud into a color representation and capture regional features with a 2D wide-inception network. The extracted features are then fed into a 1D CNN architecture. We evaluate the proposed framework on the ModelNet point cloud dataset and the Sydney LiDAR dataset. The ModelNet dataset was developed by Princeton University (New Jersey, United States), while the Sydney dataset was created by the University of Sydney (Sydney, Australia). Our numerical results show that our framework achieves higher accuracy than state-of-the-art approaches.
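A minimal sketch of this two-branch design, written in PyTorch, is shown below. It is illustrative only, not the authors' published configuration: the module names (WideInception3D, GSVNetSketch), channel widths, layer counts, and the 40-class output (matching ModelNet40) are all assumptions.

```python
import torch
import torch.nn as nn

class WideInception3D(nn.Module):
    """Illustrative 3D wide-inception block: parallel convolutions with
    different kernel sizes whose outputs are concatenated channel-wise."""
    def __init__(self, in_ch, branch_ch):
        super().__init__()
        self.b1 = nn.Conv3d(in_ch, branch_ch, kernel_size=1)
        self.b3 = nn.Conv3d(in_ch, branch_ch, kernel_size=3, padding=1)
        self.b5 = nn.Conv3d(in_ch, branch_ch, kernel_size=5, padding=2)

    def forward(self, x):
        return torch.relu(torch.cat([self.b1(x), self.b3(x), self.b5(x)], dim=1))

class GSVNetSketch(nn.Module):
    """Hypothetical two-branch fusion: a 3D CNN over the Gaussian
    Supervector volume (global feature), a 2D CNN over the color-coded
    region images (regional feature), and a 1D CNN that fuses both."""
    def __init__(self, num_classes=40):
        super().__init__()
        self.global3d = nn.Sequential(           # input: 1-channel GSV volume
            WideInception3D(1, 8),
            nn.AdaptiveAvgPool3d(1), nn.Flatten())   # -> 24-dim global feature
        self.region2d = nn.Sequential(           # input: 3-channel region image
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())   # -> 16-dim regional feature
        self.fuse1d = nn.Sequential(             # 1D CNN over concatenated features
            nn.Conv1d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Flatten(), nn.Linear(8 * 40, num_classes))

    def forward(self, gsv_volume, region_image):
        g = self.global3d(gsv_volume)            # (B, 24)
        r = self.region2d(region_image)          # (B, 16)
        fused = torch.cat([g, r], dim=1).unsqueeze(1)  # (B, 1, 40)
        return self.fuse1d(fused)                # (B, num_classes) logits
```

For example, GSVNetSketch()(torch.randn(2, 1, 16, 16, 16), torch.randn(2, 3, 64, 64)) returns a (2, 40) tensor of class logits; the two inputs stand in for a voxelized Gaussian Supervector volume and a rendered region image.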

Highlights

  • Point clouds are obtained by Light Detection and Ranging (LiDAR) sensors, depth cameras, stereo cameras, etc., and can be enhanced by additional sensors that provide multispectral, thermal, or color information [6,7,8]

  • We developed a new framework to address the drawbacks of the Multimodal Information Fusion Network (MIFN) and MVCNN methods discussed in the related work section

  • Various deep learning-based methods have been proposed for classification tasks in autonomous driving

Summary

Introduction

Point clouds are obtained by LiDAR sensors, depth cameras, stereo cameras, etc., and can be enhanced by additional sensors that provide multispectral, thermal, or color information [6,7,8]. RGB-D images can be used to construct a point cloud, where every (x, y) pixel coordinate maps to four properties (R, G, B, and depth), as illustrated in the sketch below. Point cloud classification can be applied to scene understanding tasks such as robotics and autonomous driving [9,10,11]. Deep learning cannot be applied directly to point clouds because of their irregular structure. Several researchers have proposed deep learning-based solutions for point cloud classification based on views, voxels, raw point clouds, and graphs. These approaches suffer from one or both of the following weaknesses: (1) a lack of test results on real-world data captured by LiDAR sensors, and (2) insufficient performance. We introduce a novel approach to overcome these limitations.
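As a concrete illustration of the RGB-D-to-point-cloud conversion mentioned above, the sketch below back-projects each pixel through a standard pinhole camera model. The function name and the default intrinsics (fx, fy, cx, cy) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def rgbd_to_point_cloud(rgb, depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Back-project an RGB-D image into a colored point cloud.

    rgb:   (H, W, 3) color image
    depth: (H, W) depth in meters; zero means no measurement
    fx, fy, cx, cy: pinhole intrinsics (illustrative defaults)
    Returns an (N, 6) array of [X, Y, Z, R, G, B] rows.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinate grid
    z = depth
    x = (u - cx) * z / fx                           # pinhole back-projection
    y = (v - cy) * z / fy
    valid = z > 0                                   # drop pixels without depth
    points = np.stack([x[valid], y[valid], z[valid]], axis=1)
    colors = rgb[valid].astype(np.float64)
    return np.hstack([points, colors])

# Example on synthetic data:
rgb = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
depth = np.random.uniform(0.5, 4.0, (480, 640))
cloud = rgbd_to_point_cloud(rgb, depth)             # (N, 6) array
```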
