Abstract

An accurate and computationally efficient SLAM algorithm is vital for autonomous vehicles. Most modern SLAM systems use feature detection to limit computational requirements, but feature detection on a 3D point cloud can be computationally challenging. In this paper, we propose a feature-based SLAM algorithm that operates on 2D image projections of the 3D laser point cloud. We use a camera parameter matrix to rasterize the 3D point cloud into an image, and then apply an ORB feature detector to these images. The proposed method yields repeatable and stable features in a variety of environments, from which we estimate the 6-DoF pose of the robot. For loop detection, we employ a two-step approach: nearest key-frame detection, followed by loop-candidate verification via matching of features extracted from the rasterized LIDAR images. We evaluate the proposed system on the KITTI dataset. Experimental results show that the algorithm presented in this paper substantially reduces the computational cost of feature detection from the point cloud, and of the SLAM system as a whole, while giving accurate results.
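The core projection step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the intrinsic matrix values, image size, and the simple nearest-depth z-buffer are all assumptions made for the example. After rasterization, an ORB detector (e.g., OpenCV's `cv2.ORB_create`) would be run on the resulting image.

```python
import numpy as np

def rasterize_point_cloud(points, K, image_size):
    """Project 3D points (camera frame, z forward) to a 2D depth image
    using a pinhole camera parameter matrix K (3x3 intrinsics)."""
    h, w = image_size
    img = np.zeros((h, w), dtype=np.float32)
    # Keep only points in front of the camera
    pts = points[points[:, 2] > 0]
    # Homogeneous projection: [u*z, v*z, z] = K @ p, then divide by depth
    proj = (K @ pts.T).T
    uv = proj[:, :2] / proj[:, 2:3]
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    # Discard points that fall outside the image bounds
    mask = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    # Simple z-buffer: keep the nearest depth per pixel
    for ui, vi, zi in zip(u[mask], v[mask], pts[mask, 2]):
        if img[vi, ui] == 0 or zi < img[vi, ui]:
            img[vi, ui] = zi
    return img

# Example with an assumed (illustrative) intrinsic matrix
K = np.array([[100.0,   0.0, 64.0],
              [  0.0, 100.0, 32.0],
              [  0.0,   0.0,  1.0]])
cloud = np.array([[0.0, 0.0,  5.0],   # projects to the principal point (64, 32)
                  [1.0, 0.5, 10.0],   # projects to pixel (74, 37)
                  [0.0, 0.0, -2.0]])  # behind the camera, discarded
depth_image = rasterize_point_cloud(cloud, K, (64, 128))
```

The resulting depth image is a dense 2D grid, so standard image feature detectors such as ORB can be applied at image-processing cost rather than point-cloud-processing cost, which is the efficiency gain the abstract describes.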
