Abstract

Existing neural network-based object detection approaches are trained on LiDAR point clouds produced by a single kind of LiDAR sensor. When given point clouds from a different sensor, the trained network performs worse, especially when the input point cloud has a low resolution. In this paper, we propose a new object detection approach that is more resilient to variations in point cloud resolution. Firstly, layers of the point cloud are randomly discarded during the training phase in order to increase the variability of the data processed by the network. Secondly, obstacles are described as Gaussian functions, grouping multiple parameters into a single representation, and a Bhattacharyya distance is used as the loss function. This approach is tested on a LiDAR-based network and on an architecture using camera and LiDAR sensors. The networks are trained exclusively on the KITTI dataset and tested on Pandaset and the nuScenes Mini dataset. Experiments show that our method improves the performance of the tested networks on low-resolution point clouds without decreasing their ability to process high-resolution data.
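The Bhattacharyya distance between two multivariate Gaussians has a closed form, so a loss of this kind can be computed directly from the Gaussian parameters. The sketch below is a minimal illustration, not the authors' implementation: it assumes obstacles are reduced to 2D bird's-eye-view Gaussians whose mean is the box center and whose covariance is derived from the box size and heading; the mapping in box_to_gaussian and its scaling are illustrative assumptions that may differ from the paper's exact parametrization.

```python
# Minimal sketch of a Bhattacharyya-distance loss between two obstacles
# represented as 2D Gaussians (assumed parametrization, for illustration only).
import numpy as np

def box_to_gaussian(cx, cy, length, width, yaw):
    """Hypothetical mapping from a BEV box (center, size, heading) to a Gaussian."""
    mean = np.array([cx, cy])
    # Diagonal covariance in the box frame, rotated by the heading angle.
    R = np.array([[np.cos(yaw), -np.sin(yaw)],
                  [np.sin(yaw),  np.cos(yaw)]])
    S = np.diag([(length / 2.0) ** 2, (width / 2.0) ** 2])
    return mean, R @ S @ R.T

def bhattacharyya_distance(mean1, cov1, mean2, cov2):
    """Closed-form Bhattacharyya distance between two multivariate Gaussians."""
    cov = 0.5 * (cov1 + cov2)
    diff = mean1 - mean2
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
    term2 = 0.5 * np.log(np.linalg.det(cov) /
                         np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term1 + term2

# Example: distance between a predicted and a ground-truth box.
pred = box_to_gaussian(1.0, 2.0, 4.2, 1.8, 0.1)
gt = box_to_gaussian(1.2, 2.1, 4.0, 1.8, 0.0)
print(bhattacharyya_distance(*pred, *gt))
```

Grouping the center, size, and orientation into a single Gaussian representation lets one scalar distance penalize all of these parameters jointly, instead of summing separate regression losses.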
