Abstract

This article demonstrates the feasibility of modern deep learning techniques for the real-time detection of non-stationary objects in point clouds obtained from 3-D light detecting and ranging (LiDAR) sensors. The motion segmentation task is considered in the application context of automotive Simultaneous Localization and Mapping (SLAM), where we often need to distinguish between the static parts of the environment, with respect to which we localize the vehicle, and non-stationary objects that should not be included in the map used for localization. Non-stationary objects do not provide repeatable readouts, either because they can be in motion, like vehicles and pedestrians, or because they do not have a rigid, stable surface, like trees and lawns. The proposed approach exploits images synthesized from the received-signal intensity data that modern LiDARs provide along with the usual range measurements. We demonstrate that non-stationary objects can be detected using neural network models trained on 2-D grayscale images in a supervised or unsupervised training process. This concept makes it possible to alleviate the lack of large datasets of 3-D laser scans with point-wise annotations of non-stationary objects. The point clouds are filtered using the corresponding intensity images with labeled pixels. Finally, we demonstrate that the detection of non-stationary objects using our approach improves the localization results and map consistency in a laser-based SLAM system.
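The pipeline sketched in the abstract relies on two operations: projecting the 3-D point cloud, with its intensity channel, onto a 2-D image, and then discarding the points whose pixels a network has labeled as non-stationary. The sketch below illustrates both steps under assumptions not taken from the article: a standard spherical (range-image) projection, an image resolution of 64 x 1024, and a vertical field of view typical of a 64-beam LiDAR. The function and parameter names are ours, for illustration only.

```python
import numpy as np

def spherical_projection(points, intensities, H=64, W=1024,
                         fov_up_deg=3.0, fov_down_deg=-25.0):
    """Project an (N, 3) point cloud onto a 2-D intensity image.

    A minimal sketch of a range-image projection; the vertical field
    of view is a typical value for a 64-beam LiDAR, not a parameter
    from the article. Returns the intensity image and an index map
    that allows labeled pixels to be traced back to 3-D points.
    """
    fov_up = np.radians(fov_up_deg)
    fov_down = np.radians(fov_down_deg)
    fov = fov_up - fov_down
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                                  # horizontal angle
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-9), -1.0, 1.0))
    u = (0.5 * (1.0 - yaw / np.pi) * W).astype(int) % W     # image column
    v = np.clip((1.0 - (pitch - fov_down) / fov) * H, 0, H - 1).astype(int)
    image = np.zeros((H, W), dtype=np.float32)
    index = -np.ones((H, W), dtype=int)                     # -1 marks empty pixels
    image[v, u] = intensities
    index[v, u] = np.arange(points.shape[0])
    return image, index

def remove_nonstationary(points, index, dynamic_mask):
    """Drop every point whose projected pixel is labeled non-stationary."""
    dynamic = np.zeros(points.shape[0], dtype=bool)
    hit = (index >= 0) & dynamic_mask
    dynamic[index[hit]] = True
    return points[~dynamic]
```

In this sketch `dynamic_mask` stands in for the per-pixel output of the segmentation network; the index map is what lets a 2-D labeling filter the original 3-D cloud without re-annotating points.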

Highlights

  • Although the Global Positioning System (GPS) is commonly used for outdoor localization, there are still many scenarios in which autonomous vehicles, such as self-driving cars, have to localize themselves exclusively using their exteroceptive sensors

  • We use PlaneLOAM to quantitatively demonstrate the gains from the proposed approach to motion segmentation, and we show that this approach can also be implemented in the open-source LOAM system, improving its results

  • Knowing the negative influence that points measured on non-stationary objects have on the map created by light detecting and ranging (LiDAR)-based Simultaneous Localization and Mapping (SLAM) systems, and how these points introduce invalid features into the feature-based map of PlaneLOAM, we quantitatively examine whether our approach counteracts these problems at the scale of the entire trajectory, improving the results


Introduction

Although the Global Positioning System (GPS) is commonly used for outdoor localization, there are still many scenarios in which autonomous vehicles, such as self-driving cars, have to localize themselves exclusively using their exteroceptive sensors. Such situations are typical in urban driving scenarios, due to tunnels, underground parking lots, and tall buildings. Most LiDAR-based localization methods register consecutive observations using a variant of the well-known Iterative Closest Points (ICP) algorithm [3], assuming that the observed scene is static and rigid. Explicitly detecting and tracking the dynamic objects populating the scene is useful, for example, for the purpose of maneuver planning, but it has a high computational cost.
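The ICP registration mentioned above can be sketched in a few lines. This is a minimal point-to-point ICP with a closed-form (SVD-based) rigid alignment step, not the feature-based registration used in LOAM or PlaneLOAM; the brute-force nearest-neighbor search is for illustration only, and all names are ours. It shows why the static-scene assumption matters: every point is assumed to have a rigid counterpart in the other scan.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (Kabsch/SVD) mapping src onto dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)           # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

def icp(src, dst, iters=20):
    """Brute-force point-to-point ICP; assumes a static, rigid scene."""
    R_total, t_total = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        # nearest-neighbor correspondences, O(N*M) -- fine for a demo
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        nn = dst[d.argmin(axis=1)]
        R, t = best_rigid_transform(cur, nn)
        cur = cur @ R.T + t                 # apply the incremental transform
        R_total = R @ R_total               # accumulate rotation ...
        t_total = R @ t_total + t           # ... and translation
    return R_total, t_total
```

Points measured on moving objects violate the rigid-correspondence assumption in the nearest-neighbor step, which is exactly why the motion segmentation proposed in this article removes them before registration.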
