Abstract

Road detection is a basic task in the field of automated driving, for which 3D lidar data has recently become a common input. In this paper, we propose rearranging 3D lidar data into a new organized form that makes the spatial relationships among points explicit, and we put forward new features for real-time road detection. Our model rests on two prerequisites: (1) road regions are flatter than non-road regions, and (2) light travels in straight lines in a uniform medium. Based on prerequisite 1, we propose the difference-between-lines feature, while the ScanID density and obstacle radial map are derived from prerequisite 2. Our method first constructs an array of structures to store and reorganize the 3D input. Then, two novel features, difference-between-lines and ScanID density, are extracted, from which we build a consistency map and an obstacle map in Bird's Eye View (BEV). Finally, the road region is extracted by fusing these two maps, and a refinement step polishes the result. Experiments on the public KITTI-Road benchmark show that our method achieves one of the best performances among lidar-based road detection methods. To further demonstrate its effectiveness on unstructured roads, we also present qualitative results from rural areas.
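To make the reorganization step concrete, the sketch below shows one plausible way to regroup an unordered lidar sweep into a ScanID-by-azimuth grid so that neighbors along and across scan lines are directly addressable. This is a minimal illustration, not the authors' implementation: NumPy, the 64-ring / 0.2-degree resolution, and the field layout are all assumptions.

```python
import numpy as np

NUM_RINGS = 64          # assumed number of laser rings (ScanIDs)
AZIMUTH_RES_DEG = 0.2   # assumed horizontal angular resolution
NUM_COLS = int(360 / AZIMUTH_RES_DEG)

def reorganize(points, rings):
    """Regroup an unordered point cloud into an organized grid.

    points : (N, 3) array of x, y, z coordinates
    rings  : (N,) integer ScanID (laser ring index) per point
    Returns a (NUM_RINGS, NUM_COLS, 4) grid holding [x, y, z, range].
    """
    grid = np.zeros((NUM_RINGS, NUM_COLS, 4), dtype=np.float32)
    # Horizontal angle of each point, mapped to a column index.
    azimuth = np.degrees(np.arctan2(points[:, 1], points[:, 0])) % 360.0
    cols = np.minimum((azimuth / AZIMUTH_RES_DEG).astype(int), NUM_COLS - 1)
    rng = np.linalg.norm(points, axis=1)
    grid[rings, cols, :3] = points
    grid[rings, cols, 3] = rng
    return grid

# With such a grid, a flatness cue in the spirit of the difference-between-lines
# feature can be read off directly, e.g. grid[i + 1, :, 3] - grid[i, :, 3]
# compares adjacent scan lines at the same azimuth.
```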

Highlights

  • Traversable road detection is a core task for autonomous driving vehicles and has been studied for decades

  • We evaluate the proposed method against five well-performing models demonstrated on the KITTI dataset [34]: Road Estimation with Sparse 3D Points From Velodyne (RES3D-Velo) [31], Graph Based Road Estimation using Sparse 3D Points from Velodyne (GRES3D+VELO) [35], CRF based Road Detection with Multi-Sensor Fusion (FusedCRF) [36], LidarHistogram (LidarHisto) [33], and Hybrid Conditional Random Field (HybridCRF) [37]

  • We name our method RDR, which stands for Road Detection based on Reorganized lidar Data

Summary

Introduction

Traversable road detection is a core task in the context of autonomous driving vehicles and has been studied for decades. Many implementations have been proposed based on various sensors, among which vision-based road detection is the most conventional. However, the lack of depth information makes vision-only environmental perception inadequate for constructing an accurate road model. A consensus has been reached that range data is necessary for road detection, which has led to recent research on utilizing 3D information. Range data is usually provided by stereo cameras, radars, or lidars. In off-road environments, which are much more challenging than well-structured urban scenes, 3D lidar scanners are even more widely needed.
