Abstract

We propose a data-driven method for simulating lidar sensors. The method reads computer-generated data and (i) extracts geometrically simulated lidar point clouds and (ii) predicts the strength of the lidar response, i.e., lidar intensities. Qualitative evaluation of the proposed pipeline demonstrates its ability to predict systematic failures, such as no or low responses on polished parts of car bodywork and on windows, as well as strong responses on reflective surfaces such as traffic signs and license/registration plates. We also show experimentally that, when access to real data is limited, augmenting the training set with such simulated data improves segmentation accuracy on the real dataset. The implementation of the resulting lidar simulator for the GTA V game, as well as the accompanying large dataset, is made publicly available.
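The geometric part of such a pipeline can be approximated by sampling lidar-style rays from a rendered depth buffer. The following is a minimal sketch under simplifying assumptions (a single pinhole view, ideal rays, no occlusion or noise model); the function name `depth_to_lidar_points` and the sensor parameters are illustrative, not the authors' implementation.

```python
# Minimal sketch: sample a lidar-like point cloud from a synthetic depth
# buffer by casting rays on a regular azimuth/elevation grid, similar in
# spirit to a rotating multi-beam sensor. Parameters are assumptions.
import numpy as np

def depth_to_lidar_points(depth, fx, fy, cx, cy,
                          n_beams=64, n_azimuth=1024,
                          v_fov=(-24.9, 2.0), h_fov=(-45.0, 45.0)):
    """Sample lidar-style rays from a perspective depth image.

    depth          : (H, W) array of metric depths along the camera z-axis
    fx, fy, cx, cy : pinhole intrinsics of the rendered view
    Returns an (N, 3) array of 3D points in the camera frame.
    """
    elev = np.deg2rad(np.linspace(v_fov[0], v_fov[1], n_beams))
    azim = np.deg2rad(np.linspace(h_fov[0], h_fov[1], n_azimuth))
    elev, azim = np.meshgrid(elev, azim, indexing="ij")

    # Ray directions in the camera frame (z forward, x right, y down).
    dirs = np.stack([np.cos(elev) * np.sin(azim),
                     -np.sin(elev),
                     np.cos(elev) * np.cos(azim)], axis=-1)

    # Project each ray onto the image plane and look up the rendered depth.
    u = fx * dirs[..., 0] / dirs[..., 2] + cx
    v = fy * dirs[..., 1] / dirs[..., 2] + cy
    valid = (u >= 0) & (u < depth.shape[1]) & (v >= 0) & (v < depth.shape[0])
    z = np.full(dirs.shape[:2], np.nan)
    z[valid] = depth[v[valid].astype(int), u[valid].astype(int)]

    # Scale the unit-z ray directions by the sampled depth to get 3D points.
    pts = dirs / dirs[..., 2:3] * z[..., None]
    return pts[valid & np.isfinite(z)]

# Example usage on a flat synthetic depth map 10 m in front of the camera:
# pts = depth_to_lidar_points(np.full((1080, 1920), 10.0),
#                             fx=960, fy=960, cx=960, cy=540)
```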

Highlights

  • There were over 1.2 billion vehicles in use around the world in 2015. When a novel autonomous functionality, such as autonomous emergency braking, is to be put into operation, its reliability has to be thoroughly tested, because the impact on the accident rate is enormous

  • We propose to leverage other information about the object, such as its color and class label, and study the benefits of these modalities for predicting the lidar response learned from real-world driving scenarios

  • The contributions of this paper are four-fold: 1) we propose a way of modeling intensity from the lidar geometry, RGB images, and class labels (a minimal sketch follows the highlights)

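The last highlight mentions learning intensity from the lidar geometry, RGB images, and class labels. As a rough illustration of how such a per-point regressor could look, here is a minimal PyTorch sketch; the feature set, network size, and MSE loss are assumptions made for illustration and do not reproduce the architecture used in the paper.

```python
# Hedged sketch of a per-point intensity regressor. The exact architecture,
# features and loss below are illustrative assumptions, not the published model.
import torch
import torch.nn as nn

N_CLASSES = 13  # assumed number of semantic classes

class IntensityMLP(nn.Module):
    """Predict a lidar intensity in [0, 1] for each point from simple features:
    range, cosine of the incidence angle, RGB color, and a one-hot class label."""
    def __init__(self, n_classes=N_CLASSES):
        super().__init__()
        in_dim = 1 + 1 + 3 + n_classes  # range, cos(incidence), RGB, class
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),
        )

    def forward(self, rng, cos_inc, rgb, cls_onehot):
        x = torch.cat([rng, cos_inc, rgb, cls_onehot], dim=-1)
        return self.net(x).squeeze(-1)

# One training step on a batch of simulated points with ground-truth intensity.
model = IntensityMLP()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

rng = torch.rand(4096, 1) * 100.0   # range in meters (placeholder data)
cos_inc = torch.rand(4096, 1)       # cosine of the incidence angle
rgb = torch.rand(4096, 3)           # color sampled from the rendered image
cls = torch.eye(N_CLASSES)[torch.randint(N_CLASSES, (4096,))]
target = torch.rand(4096)           # placeholder ground-truth intensities

opt.zero_grad()
pred = model(rng, cos_inc, rgb, cls)
loss = loss_fn(pred, target)
loss.backward()
opt.step()
```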

Summary

INTRODUCTION

There were over 1.2 billion vehicles in use around the world in 2015. When a novel autonomous functionality, such as autonomous emergency braking, is to be put into operation, its reliability has to be thoroughly tested, because the impact on the accident rate is enormous. Datasets alone do not provide options for validating autonomous driving capabilities with respect to the interpreted scene. These constraints point to the necessity of realistic and automatically annotated simulators. 3) We provide a publicly available lidar interface for the GTA V game, which allows for the automatic generation of synthetic annotated training and evaluation datasets. 4) We provide a large public GTA V dataset for object detection and semantic segmentation from RGB+lidar data, which consists of approximately 40 000 frames. Both the source code and the dataset are available for download at https://github.com/vras-group/lidar-intensity

Large-Scale Lidar Datasets
Simulators With Lidar Point Cloud Properties
Simulation of Intensity
METHODS
Geometrical Simulation
Data-Driven Intensity Simulation
Learning of the Intensity-Predicting Network
EXPERIMENTS
Intensity Prediction Accuracy
Segmentation Accuracy Improvement
CONCLUSION