Abstract

Accurate positioning of vehicles plays an important role in autonomous driving. In our previous research on landmark-based positioning, poles were extracted both from reference data and from online sensor data and then matched to improve the positioning accuracy of the vehicle. However, some environments contain only a limited number of poles. 3D feature points are a suitable alternative type of landmark: they can be assumed to be present in the environment independently of particular object classes. To match online LiDAR data to a LiDAR-derived reference dataset, the extraction of 3D feature points is an essential step. In this paper, we address the problem of 3D feature point extraction from LiDAR datasets. Instead of hand-crafting a 3D feature point extractor, we propose to train one using a neural network. In this approach, a set of candidate 3D feature points is first detected by the Shi-Tomasi corner detector on range images of the LiDAR point cloud. Trained with a back-propagation algorithm, the artificial neural network then predicts feature points from these corner candidates. The training considers not only the shape of each corner candidate in the 2D range image, but also 3D features such as the curvature and the z component of the surface normal, which are computed directly from the LiDAR point cloud. Subsequently, the feature points extracted in the 2D range images are retrieved in the 3D scene. The 3D feature points extracted by this approach are generally distinctive in 3D space. Our tests show that the proposed method provides a sufficient number of repeatable 3D feature points for the matching task. These feature points have great potential to serve as landmarks for improved vehicle localization.
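
The following is a minimal, hedged sketch of the per-candidate 3D features named above (surface curvature and the z component of the surface normal, obtained by local PCA over k nearest neighbours) feeding a small neural network classifier trained with back propagation. The neighbourhood size, the scikit-learn classifier configuration and the placeholder point cloud and labels are illustrative assumptions, not the authors' exact setup.

```python
# Sketch: PCA-based 3D features per corner candidate + a small MLP predictor.
import numpy as np
from scipy.spatial import cKDTree
from sklearn.neural_network import MLPClassifier

def curvature_and_normal_z(points, query, k=20):
    """Surface variation (curvature proxy) and |n_z| for each query point."""
    tree = cKDTree(points)
    feats = []
    for q in query:
        _, idx = tree.query(q, k=k)                  # k nearest neighbours
        nbrs = points[idx] - points[idx].mean(axis=0)
        evals, evecs = np.linalg.eigh(nbrs.T @ nbrs)  # ascending eigenvalues
        curvature = evals[0] / evals.sum()            # lambda_0 / sum(lambda)
        normal_z = abs(evecs[2, 0])                   # z of smallest-eigval axis
        feats.append([curvature, normal_z])
    return np.asarray(feats)

# Placeholder cloud, corner candidates back-projected to 3D, and labels
# (in the paper these come from the training-example derivation step).
cloud = np.random.rand(5000, 3)
candidates = cloud[np.random.choice(len(cloud), 200, replace=False)]
X = curvature_and_normal_z(cloud, candidates)
y = np.random.randint(0, 2, len(candidates))          # placeholder labels
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(X, y)
```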

Highlights

  • Advanced Driver Assistance Systems (ADAS) are a popular topic in research and development, aiming to increase the safety of vehicles

  • Most self-driving cars are equipped with LiDAR sensors, and LiDAR is expected to become a standard component of future ADAS, used for obstacle detection and environment sensing

  • Our approach for 3D feature point extraction from LiDAR data consists of five major steps: (i) generating range images, (ii) corner detection on range images, (iii) derivation of training examples, (iv) neural network training using back propagation and (v) prediction of the 3D feature points. Before generating range images from the LiDAR point cloud, we first remove the points on the ground, because these points are usually less distinctive in the 3D scene (a minimal sketch of the first two steps is given below)
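
A minimal sketch of steps (i) and (ii), ground removal by a simple height cut, range-image generation by spherical projection, and Shi-Tomasi corner detection with OpenCV, follows. The sensor field of view, image resolution and height threshold are assumptions for illustration, not values from the paper.

```python
# Sketch: ground removal, spherical-projection range image, Shi-Tomasi corners.
import numpy as np
import cv2

def remove_ground(points, z_threshold=-1.5):
    """Drop points below an assumed ground height (simple height cut)."""
    return points[points[:, 2] > z_threshold]

def to_range_image(points, h=64, w=1024,
                   fov_up=np.deg2rad(2.0), fov_down=np.deg2rad(-24.8)):
    """Project an N x 3 point cloud into an h x w range image."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                                # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / r, -1.0, 1.0))          # elevation
    u = ((0.5 * (yaw / np.pi + 1.0)) * w).astype(int) % w
    v = np.clip(((fov_up - pitch) / (fov_up - fov_down) * h).astype(int), 0, h - 1)
    img = np.zeros((h, w), dtype=np.float32)
    img[v, u] = r                                         # last range per pixel wins
    return img

cloud = np.random.rand(100000, 3) * 50 - 25               # placeholder point cloud
rng = to_range_image(remove_ground(cloud))
norm = cv2.normalize(rng, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
corners = cv2.goodFeaturesToTrack(norm, maxCorners=500,
                                  qualityLevel=0.01, minDistance=5)
```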


Introduction

Advanced Driver Assistance Systems (ADAS) are a popular topic in research and development, aiming to increase the safety of vehicles. Most self-driving cars are equipped with LiDAR sensors, and LiDAR is expected to become a standard component of future ADAS, used for obstacle detection and environment sensing. The use of these sensors in ADAS will improve the localization of vehicles. In Brenner (2010), poles were extracted from the dense 3D point cloud measured by a mobile mapping LiDAR system. Using these extracted poles, a map of landmarks was generated as reference data and stored in a GIS. The poles extracted from the vehicle data were then matched with this reference to improve the positioning accuracy of the vehicle.
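
As a rough illustration of the matching step in that previous approach, the sketch below pairs 2D pole detections with a reference pole map by nearest-neighbour search. The use of a KD-tree, the distance gate and the example coordinates are assumptions for illustration, not the exact method of Brenner (2010).

```python
# Sketch: match online pole detections to a reference pole map.
import numpy as np
from scipy.spatial import cKDTree

def match_poles(detected_xy, reference_xy, max_dist=1.0):
    """Return (detected_index, reference_index) pairs within max_dist metres."""
    tree = cKDTree(reference_xy)
    dist, idx = tree.query(detected_xy, k=1)
    keep = dist <= max_dist
    return np.column_stack([np.nonzero(keep)[0], idx[keep]])

reference_map = np.array([[10.0, 2.0], [15.5, -3.2], [22.1, 4.8]])  # from the landmark map
detections = np.array([[10.3, 2.1], [21.8, 4.6]])                   # from vehicle LiDAR
pairs = match_poles(detections, reference_map)                      # e.g. [[0, 0], [1, 2]]
```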
