Abstract

Road detection is an important task in autonomous navigation systems. In this paper, we propose a road detection framework induced by the inverse depth of the LiDAR point cloud. The framework fuses a 3-D LiDAR with a monocular camera: the LiDAR point cloud is projected onto the camera image frame so that both range and color information can be exploited. For the road detection task, we propose an inverse-depth-aware fully convolutional neural network that operates on image information, together with a line scanning strategy based on an inverse-depth histogram of the LiDAR point cloud. Finally, a conditional random field fusion method integrates the two road detection results. Our method is evaluated on the KITTI-Road benchmark, and experiments demonstrate that it achieves state-of-the-art performance among all published methods that have reported results on this benchmark.
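
To make the fusion step concrete, the sketch below shows one common way to project a 3-D LiDAR point cloud onto a camera image and build a sparse inverse-depth (1/z) map, in the spirit of the projection described above. The function name, calibration matrices, and parameters are illustrative assumptions (KITTI-style calibration), not the authors' actual implementation.

```python
"""Minimal sketch: project LiDAR points into the image and form an
inverse-depth map. Assumes KITTI-style calibration matrices; this is
not the paper's code."""
import numpy as np

def project_to_inverse_depth_map(points, T_velo_to_cam, P_rect, image_shape):
    """points        : (N, 3) LiDAR points in the sensor frame.
       T_velo_to_cam : (4, 4) rigid transform from LiDAR to camera frame.
       P_rect        : (3, 4) camera projection matrix.
       image_shape   : (H, W) of the camera image."""
    H, W = image_shape

    # Homogeneous coordinates, transformed into the camera frame.
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])
    pts_cam = (T_velo_to_cam @ pts_h.T).T

    # Keep only points in front of the camera.
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]

    # Perspective projection to pixel coordinates.
    proj = (P_rect @ pts_cam.T).T
    u = (proj[:, 0] / proj[:, 2]).astype(int)
    v = (proj[:, 1] / proj[:, 2]).astype(int)
    z = pts_cam[:, 2]

    # Discard projections that fall outside the image.
    valid = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    u, v, z = u[valid], v[valid], z[valid]

    # Sparse inverse-depth map; closer points (larger 1/z) overwrite
    # farther ones that project to the same pixel.
    inv_depth = np.zeros((H, W), dtype=np.float32)
    order = np.argsort(1.0 / z)          # draw far points first
    inv_depth[v[order], u[order]] = 1.0 / z[order]
    return inv_depth
```

Such an inverse-depth map could then be fed, alongside the RGB image, to an inverse-depth-aware network, or summarized as a row-wise histogram for a line scanning strategy; the specific designs are detailed in the paper itself.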
