Abstract

Recently, 3D object detection based on multi-modal sensor fusion has been increasingly adopted in automated driving and robotics. For example, the semantic information provided by cameras and the geometric information provided by light detection and ranging (LiDAR) are fused to perceive 3D objects, since a single-modal sensor cannot capture enough information from the environment. Many state-of-the-art methods fuse the signals sequentially for simplicity; by sequentially, we mean that the image semantic signals are fed as an auxiliary input to a LiDAR-based object detector, which makes the overall performance rely heavily on the semantic signals. Moreover, errors introduced by these signals may propagate into detection errors. To remedy this dilemma, we propose an approach coined supervised-PointRendering, which corrects potential errors in the image semantic segmentation results by training auxiliary tasks on a fusion of the laser point geometry feature, the image semantic feature and a novel laser visibility feature. The laser visibility feature is obtained through a raycasting algorithm and is used to constrain the spatial distribution of foreground and background objects. Furthermore, we build an efficient anchor-free Single Stage Detector (SSD) powered by an advanced global-optimal label assignment to achieve a better balance between inference time and accuracy. The new detection framework is evaluated on the widely used KITTI and nuScenes datasets, attaining the highest inference speed while outperforming most existing single-stage detectors in average precision.
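
The abstract names a raycasting procedure for the laser visibility feature but does not specify its construction. The Python sketch below illustrates one common way to derive such a feature: casting a ray from the LiDAR sensor to each return and marking traversed voxels as free, the hit voxel as occupied, and untouched voxels as unknown. The function name, voxel-grid parameters and sub-voxel ray-sampling scheme are illustrative assumptions, not the paper's implementation.

    import numpy as np

    # Sketch (assumption, not the paper's code): build a voxel visibility grid by
    # casting a ray from the LiDAR sensor to every return.  Voxels crossed before
    # the hit are FREE, the voxel containing the hit is OCCUPIED, and voxels never
    # touched stay UNKNOWN.
    UNKNOWN, FREE, OCCUPIED = 0, 1, 2

    def lidar_visibility_grid(points,
                              grid_shape=(400, 400, 20),      # illustrative sizes
                              voxel_size=(0.25, 0.25, 0.25),   # metres per voxel
                              grid_origin=(-50.0, -50.0, -2.5),
                              sensor_xyz=(0.0, 0.0, 0.0)):
        vis = np.full(grid_shape, UNKNOWN, dtype=np.uint8)
        voxel = np.asarray(voxel_size, dtype=np.float32)
        origin = np.asarray(grid_origin, dtype=np.float32)
        sensor = np.asarray(sensor_xyz, dtype=np.float32)

        def voxel_index(xyz):
            idx = np.floor((xyz - origin) / voxel).astype(int)
            return tuple(idx) if np.all((idx >= 0) & (idx < grid_shape)) else None

        for hit_point in np.asarray(points, dtype=np.float32)[:, :3]:
            ray = hit_point - sensor
            # sample the ray at sub-voxel steps (a simple stand-in for 3D-DDA traversal)
            n_steps = max(int(np.linalg.norm(ray) / (0.5 * voxel.min())), 1)
            for t in np.linspace(0.0, 1.0, n_steps, endpoint=False):
                idx = voxel_index(sensor + t * ray)
                if idx is not None and vis[idx] == UNKNOWN:
                    vis[idx] = FREE
            hit_idx = voxel_index(hit_point)
            if hit_idx is not None:
                vis[hit_idx] = OCCUPIED      # the return itself marks occupied space
        return vis

    # Usage example on a synthetic cloud of 1000 returns around the sensor.
    if __name__ == "__main__":
        xy = np.random.uniform(-40.0, 40.0, size=(1000, 2))
        z = np.random.uniform(-2.0, 2.0, size=(1000, 1))
        grid = lidar_visibility_grid(np.hstack([xy, z]))
        print("free voxels:", int((grid == FREE).sum()),
              "occupied voxels:", int((grid == OCCUPIED).sum()))

A per-voxel free/occupied/unknown code of this kind can be concatenated with the point geometry and image semantic features, which is consistent with the abstract's description of the visibility feature constraining the spatial distribution of foreground and background objects.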
