Abstract

Light detection and ranging (LiDAR) has been widely used in autonomous vehicles for perception and localization. However, high-resolution LiDARs remain prohibitively expensive, while their low-resolution counterparts are far more affordable. Using a low-resolution LiDAR for autonomous driving is therefore an economically viable solution, but the sparsity of its point cloud makes the task extremely challenging. In this letter, we propose a two-stage neural network framework that enables 3-D object detection using a low-resolution LiDAR. Taking a low-resolution LiDAR point cloud and a monocular camera image as input, a depth completion network produces a dense point cloud that is subsequently processed by a voxel-based network for 3-D object detection. Evaluated on the KITTI dataset for 3-D object detection in bird's-eye view (BEV), the proposed approach performs significantly better than directly applying the 16-line LiDAR point cloud for object detection. For both easy and moderate cases, our 3-D vehicle detection results are close to those obtained with 64-line high-resolution LiDARs.
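For concreteness, the following is a minimal sketch of how such a two-stage pipeline could be wired together, assuming a PyTorch implementation. The module layers, tensor shapes, and camera intrinsics (fx, fy, cx, cy) are illustrative placeholders, not the authors' actual architecture; stage 1 completes a sparse depth map fused with the RGB image, and the back-projected dense point cloud would then feed a voxel-based detector.

```python
# Hypothetical sketch of the two-stage pipeline described in the abstract.
# Layer choices and shapes are assumptions for illustration only.
import torch
import torch.nn as nn


class DepthCompletionNet(nn.Module):
    """Stage 1: fuse a sparse LiDAR depth map with an RGB image to
    predict a dense depth map (placeholder layers)."""

    def __init__(self):
        super().__init__()
        # 3 RGB channels + 1 sparse-depth channel -> 1 dense-depth channel
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),
        )

    def forward(self, rgb, sparse_depth):
        return self.net(torch.cat([rgb, sparse_depth], dim=1))


def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a dense depth map (B, 1, H, W) into a 3-D point
    cloud (B, H*W, 3) with the pinhole camera model."""
    b, _, h, w = depth.shape
    v, u = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    z = depth.squeeze(1)
    x = (u.to(z) - cx) * z / fx
    y = (v.to(z) - cy) * z / fy
    return torch.stack([x, y, z], dim=-1).reshape(b, -1, 3)


# Usage sketch: the resulting dense point cloud would be voxelized and
# passed to a voxel-based 3-D detection network (stage 2, not shown).
completion = DepthCompletionNet()
rgb = torch.rand(1, 3, 64, 64)        # monocular camera image
sparse = torch.zeros(1, 1, 64, 64)    # projected 16-line LiDAR depth
dense = completion(rgb, sparse)
points = depth_to_points(dense, fx=500.0, fy=500.0, cx=32.0, cy=32.0)
```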
