Abstract

This paper presents a real-time 3D object detector based on LiDAR-based Simultaneous Localization and Mapping (LiDAR-SLAM). The 3D point clouds acquired by mobile LiDAR systems in building environments are usually highly sparse and irregularly distributed, and they often contain occlusions and structural ambiguity. Existing 3D object detection methods based on Convolutional Neural Networks (CNNs) rely heavily on both the stability of 3D features and large amounts of labelled data. A key challenge is therefore the efficient detection of 3D objects in point clouds of large-scale building environments without pre-training a 3D CNN model. We combine visual and range information in a frustum-based probabilistic framework that projects image-based object detection results and LiDAR-SLAM results onto a 3D probability map. In this way, we address the sparsity and noise of LiDAR-SLAM data, to which conventional point cloud descriptors can hardly be applied. The 3D object detection results, obtained on both a backpack LiDAR dataset and the well-known KITTI Vision Benchmark Suite, show that our method outperforms state-of-the-art methods in object localization and bounding box estimation.
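To illustrate the frustum-based idea described above, the following is a minimal sketch, not the authors' implementation: a 2D image detection is back-projected into a camera frustum, LiDAR points falling inside the frustum are selected, and the detection confidence is accumulated into a coarse 3D probability map. All function names, the voxel size, and the example intrinsics are illustrative assumptions.

```python
# Hedged sketch of frustum selection and probability-map accumulation.
# Names, parameters, and the voxel resolution are illustrative assumptions,
# not the paper's actual implementation.
import numpy as np

def points_in_frustum(points_cam, box2d, K):
    """Select points (already in the camera frame) whose pinhole projections
    fall inside a 2D detection box (u_min, v_min, u_max, v_max)."""
    u_min, v_min, u_max, v_max = box2d
    z = points_cam[:, 2]
    in_front = z > 0.1                       # keep points in front of the camera
    uv = (K @ points_cam.T).T                # project with intrinsics K
    u, v = uv[:, 0] / uv[:, 2], uv[:, 1] / uv[:, 2]
    in_box = (u >= u_min) & (u <= u_max) & (v >= v_min) & (v <= v_max)
    return points_cam[in_front & in_box]

def accumulate_probability_map(prob_map, points, score, voxel=0.2):
    """Add a detection's confidence to every voxel hit by its frustum points.
    prob_map is a dict {voxel_index: accumulated score}."""
    idx = np.floor(points / voxel).astype(int)
    for key in map(tuple, idx):
        prob_map[key] = prob_map.get(key, 0.0) + score
    return prob_map

if __name__ == "__main__":
    K = np.array([[721.5, 0.0, 609.6],       # example KITTI-like intrinsics
                  [0.0, 721.5, 172.9],
                  [0.0, 0.0, 1.0]])
    rng = np.random.default_rng(0)
    cloud = rng.uniform([-10, -2, 0], [10, 2, 40], size=(5000, 3))  # sparse cloud
    frustum_pts = points_in_frustum(cloud, box2d=(500, 150, 700, 300), K=K)
    # In practice, frustum points would be transformed into the SLAM map frame
    # using the estimated pose before accumulation.
    prob_map = accumulate_probability_map({}, frustum_pts, score=0.9)
    print(len(frustum_pts), "points selected,", len(prob_map), "voxels updated")
```

In such a scheme, repeated detections of the same object from different poses reinforce the same voxels, which is one way the sparsity and noise of per-scan LiDAR-SLAM data can be compensated without a pre-trained 3D CNN.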
