Abstract

The method of simultaneous localization and mapping (SLAM) using a light detection and ranging (LiDAR) sensor is commonly adopted for robot navigation. However, consumer robots are price sensitive and often have to use low-cost sensors. Due to the poor performance of a low-cost LiDAR, error accumulates rapidly during SLAM, which can lead to large errors when building a larger map. To cope with this problem, this paper proposes a new graph optimization-based SLAM framework that combines a low-cost LiDAR sensor with a vision sensor. In the SLAM framework, a new cost function considering both scan and image data is proposed, and the Bag of Words (BoW) model with visual features is applied for loop closure detection. A 2.5D map representing both obstacles and visual features is also proposed, along with a fast relocation method based on this map. Experiments were conducted on a service robot equipped with a 360° low-cost LiDAR and a front-view RGB-D camera in a real indoor scene. The results show that the proposed method performs better than using LiDAR or a camera alone, while relocation with our 2.5D map is much faster than with a traditional grid map.
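As an illustration of how scan and image measurements might be balanced within a single optimization, the following is a minimal sketch in Python, assuming simple 2D point-to-point residuals for matched LiDAR scan points and visual-feature landmarks with hand-chosen weights; names such as united_residual, w_scan, and w_feat are hypothetical, and this is not the cost function actually used in the paper.

```python
import numpy as np
from scipy.optimize import least_squares

def pose_to_matrix(pose):
    """Convert a 2D pose (x, y, theta) to a 3x3 homogeneous transform."""
    x, y, th = pose
    c, s = np.cos(th), np.sin(th)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

def transform_points(pose, pts):
    """Apply the pose to an (N, 2) array of 2D points."""
    T = pose_to_matrix(pose)
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    return (T @ pts_h.T).T[:, :2]

def united_residual(pose, scan_pts, scan_refs, feat_pts, feat_refs,
                    w_scan=1.0, w_feat=1.0):
    """Stack LiDAR scan-matching residuals and visual-feature residuals
    into one vector, so a single least-squares solve balances both terms."""
    r_scan = (transform_points(pose, scan_pts) - scan_refs).ravel()
    r_feat = (transform_points(pose, feat_pts) - feat_refs).ravel()
    return np.concatenate([w_scan * r_scan, w_feat * r_feat])

# Toy data: scan points and visual-feature positions observed in the robot
# frame, plus their matched reference positions in the map frame.
rng = np.random.default_rng(0)
true_pose = np.array([0.5, -0.2, 0.1])
scan_body = rng.uniform(-2, 2, size=(30, 2))
feat_body = rng.uniform(-2, 2, size=(10, 2))
scan_map = transform_points(true_pose, scan_body) + 0.01 * rng.standard_normal((30, 2))
feat_map = transform_points(true_pose, feat_body) + 0.02 * rng.standard_normal((10, 2))

result = least_squares(united_residual, x0=np.zeros(3),
                       args=(scan_body, scan_map, feat_body, feat_map))
print("estimated pose:", result.x)  # should be close to true_pose
```

In a graph optimization-based framework, such scan and image terms would be attached to pose-graph edges and optimized jointly over all robot poses; the sketch estimates a single pose only for brevity.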

Highlights

  • Localization and navigation are the key technologies of autonomous mobile service robots, and simultaneous localization and mapping (SLAM) is considered an essential basis for them. The main principle of SLAM is to perceive the surrounding environment through sensors on the robot, and to construct a map of the environment while estimating the pose of the robot

  • The results show that the proposed method performs better than using light detection and ranging (LiDAR) or a camera alone, while relocation with our 2.5D map is much faster than with a traditional grid map

  • SLAM with a monocular camera and a laser is introduced, under the assumption that walls are normal to the ground and vertically flat. [23] integrates different state-of-the-art SLAM methods based on vision, laser, and inertial measurements using an Extended Kalman Filter (EKF) for Unmanned Aerial Vehicles (UAVs) in indoor environments. [24] presents a localization method based on cooperation between aerial and ground robots in an indoor environment, in which a 2.5D elevation map is built from an RGB-D sensor and a 2D LiDAR attached to the UAV. [25] provides a scale estimation and drift correction method for mono-SLAM by combining a laser range finder and a camera

Summary

Introduction

Localization and navigation are the key technologies of autonomous mobile service robots, and simultaneous localization and mapping (SLAM) is considered an essential basis for them. The main principle of SLAM is to perceive the surrounding environment through sensors on the robot, and to construct a map of the environment while estimating the pose (including both location and orientation) of the robot. Since SLAM was first put forward in 1988, it has developed rapidly, and many different schemes have been formed. Depending on the main sensor applied, there are two mainstream practical approaches: LiDAR (Light Detection and Ranging)-SLAM and Visual-SLAM

LiDAR-SLAM
Visual-SLAM
Multi-Sensor Fusion
Problems in Application
The Contributions of this Paper
The SLAM Framework of Low-Cost LiDAR and Vision Fusion
United Error Function
Pose Graph Optimization
Loop Detection
Traditional Grid Map and Feature Map
Experiment
Experimental
Experiment of Building the Map
Comparison
Experiment of Relocation
Findings
Conclusions and Future Work