Abstract

Reliable road detection is a key issue for modern Intelligent Vehicles, since it can help to identify the drivable area as well as boost other perception functions such as object detection. However, real environments present several challenges, such as illumination changes and varying weather conditions. We propose a multi-modal road detection and segmentation method based on monocular images and high-definition multi-layer LIDAR data (3D point cloud). The algorithm consists of three stages: extraction of ground points from the multi-layer LIDAR, transformation of the color camera information into an illumination-invariant representation, and finally segmentation of the road area. The core function of the first module is to extract the ground points from the LIDAR data. To this end, road boundaries are first detected through histogram analysis, a plane is then estimated using RANSAC, and ground points are extracted according to their point-to-plane distance. In the second module, an illumination-invariant image representation is computed simultaneously. The ground points are projected onto the image plane and used to compute a road probability map with a Gaussian model. The combination of these modalities improves the robustness of the whole system and reduces the overall computational time, since the first two modules can run in parallel. Quantitative experiments carried out on the public KITTI dataset, enhanced with road annotations, confirm the effectiveness of the proposed method.
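As a rough illustration of the ground-point extraction stage, the sketch below shows how RANSAC plane estimation followed by a point-to-plane distance test could be implemented with NumPy. The sampling loop, the 0.15 m inlier threshold, and the synthetic point cloud are assumptions for demonstration only; the paper's pipeline additionally pre-filters candidate points with histogram-based road boundary detection, which is not reproduced here.

```python
import numpy as np

def fit_plane_ransac(points, n_iters=200, dist_thresh=0.15, rng=None):
    """Estimate a dominant plane (a, b, c, d) with ax + by + cz + d = 0
    from an (N, 3) point cloud using a simple RANSAC loop."""
    rng = np.random.default_rng() if rng is None else rng
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = None
    for _ in range(n_iters):
        # Sample three points and derive the candidate plane normal.
        idx = rng.choice(len(points), size=3, replace=False)
        p0, p1, p2 = points[idx]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:  # degenerate (nearly collinear) sample, skip
            continue
        normal /= norm
        d = -normal.dot(p0)
        # Point-to-plane distance for every LIDAR return.
        dist = np.abs(points @ normal + d)
        inliers = dist < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, np.append(normal, d)
    return best_plane, best_inliers

if __name__ == "__main__":
    # Synthetic stand-in for a LIDAR scan: a flat ground plane plus clutter.
    rng = np.random.default_rng(0)
    ground = np.column_stack([rng.uniform(-20, 20, 5000),
                              rng.uniform(-20, 20, 5000),
                              rng.normal(0.0, 0.03, 5000)])
    clutter = rng.uniform([-20, -20, 0.5], [20, 20, 3.0], size=(1000, 3))
    cloud = np.vstack([ground, clutter])

    plane, ground_mask = fit_plane_ransac(cloud, dist_thresh=0.15, rng=rng)
    print("estimated plane:", plane)
    print("ground points kept:", int(ground_mask.sum()), "of", len(cloud))
```

In a full system, the points flagged by `ground_mask` would then be projected into the image plane to seed the Gaussian road probability model described in the abstract.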
