Abstract

Simultaneous localization and mapping (SLAM) is currently one of the main research topics in robotics. Visual-inertial SLAM, which combines a camera with an inertial measurement unit (IMU), can significantly improve robustness and renders scale weakly observable, whereas in monocular visual SLAM the scale is unobservable. For ground mobile robots, introducing a wheel speed sensor can solve the weak scale observability problem and improve robustness under abnormal conditions. In this paper, a multi-sensor fusion SLAM algorithm using monocular visual, inertial, and wheel speed measurements is proposed. The sensor measurements are combined in a tightly coupled manner, and a nonlinear optimization method is used to maximize the posterior probability and solve for the optimal state estimate. Loop detection and back-end optimization are added to reduce, or even eliminate, the cumulative error of the estimated poses, ensuring global consistency of the trajectory and map. The key contributions of this paper are twofold: a wheel odometer pre-integration algorithm that combines the chassis speed and the IMU angular velocity, avoiding the repeated integration caused by linearization point changes during iterative optimization; and state initialization based on the wheel odometer and IMU, which quickly and reliably computes the initial state values required by the state estimator whether the robot starts at rest or in motion. Comparative experiments were conducted in room-scale scenes, building-scale scenes, and visual loss scenarios. The results show that the proposed algorithm is highly accurate (2.2 m of cumulative error after traveling 812 m, i.e., 0.28%, with loop closure optimization disabled), robust, and capable of effective localization even in the event of sensor loss, including visual loss. The accuracy and robustness of the proposed method are superior to those of monocular visual-inertial SLAM and traditional wheel odometry.
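The paper's equations are not reproduced on this page. As a minimal sketch of the tightly coupled, maximum-a-posteriori formulation described above, assuming standard visual-inertial factor-graph notation, such an objective typically takes the following form; the state vector, residual symbols, and covariances here are illustrative, not the paper's own notation:

```latex
% Minimal sketch of a tightly coupled MAP objective (illustrative notation,
% not the paper's own symbols). Maximizing the posterior over the window of
% states X is equivalent to minimizing the sum of squared weighted residuals.
\hat{\mathcal{X}} = \arg\min_{\mathcal{X}} \Bigg\{
    \underbrace{\big\lVert r_p - H_p \mathcal{X} \big\rVert^2}_{\text{marginalization prior}}
  + \sum_{(l,j) \in \mathcal{C}} \rho\Big( \big\lVert r_C(z_{lj}, \mathcal{X}) \big\rVert_{\Sigma_C}^2 \Big)
  + \sum_{k \in \mathcal{B}} \big\lVert r_I(z_{k,k+1}, \mathcal{X}) \big\rVert_{\Sigma_I}^2
  + \sum_{k \in \mathcal{B}} \big\lVert r_O(z_{k,k+1}, \mathcal{X}) \big\rVert_{\Sigma_O}^2
\Bigg\}
```

Here r_C, r_I, and r_O stand for the visual reprojection, IMU pre-integration, and wheel odometer pre-integration residuals, the Σ terms for their measurement covariances, and ρ for a robust kernel; each sum corresponds to one of the constraint sections outlined below.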

Highlights

  • Influential monocular-vision simultaneous localization and mapping (SLAM) systems at present include ORB-SLAM2 [1], LSD-SLAM [2], DSO [3], and SVO [4]

  • The multi-sensor fusion state estimator in this study fuses monocular visual, inertial measurement unit (IMU), and wheel odometer measurements, with the visual front end based on feature point optical flow tracking [16]

  • A high-performance PC serves as the host computer, processing the data from each sensor and running the SLAM algorithm, while an embedded controller serves as the lower-level computer, handling motion control of the mobile robot chassis

Summary

Introduction

Influential monocular-vision simultaneous localization and mapping (SLAM) systems include ORB-SLAM2 [1], LSD-SLAM [2], DSO [3], and SVO [4]. However, there are few studies on tightly coupled, optimization-based multi-sensor fusion SLAM for wheeled mobile robots that fuses visual, inertial, and wheel speed measurements. This paper proposes such an algorithm: the multi-sensor fusion state estimator combines monocular visual, IMU, and wheel odometer measurements, with the visual front end based on feature point optical flow tracking [16]. None of these sensors can measure the absolute pose, so the estimated trajectory inevitably accumulates drift, which the loop detection and back-end optimization help correct.
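As a concrete illustration of such a front end, the sketch below tracks Shi-Tomasi corners between consecutive frames with pyramidal Lucas-Kanade optical flow. The OpenCV functions used (cv2.goodFeaturesToTrack, cv2.calcOpticalFlowPyrLK) are real, but the class structure, corner budget, and window sizes are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of an optical-flow feature-tracking front end (assumed
# structure): detect Shi-Tomasi corners, then follow them frame to frame
# with pyramidal Lucas-Kanade tracking.
import cv2
import numpy as np

class FeatureTracker:
    def __init__(self, max_corners=150):
        self.max_corners = max_corners
        self.prev_gray = None
        self.prev_pts = None   # shape (N, 1, 2), float32

    def track(self, frame):
        """Return matched (previous, current) point arrays of shape (M, 2)."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if self.prev_pts is None or len(self.prev_pts) == 0:
            # Nothing to track yet: (re)detect corners and wait for the next frame.
            self.prev_pts = cv2.goodFeaturesToTrack(
                gray, maxCorners=self.max_corners,
                qualityLevel=0.01, minDistance=20)
            self.prev_gray = gray
            return np.empty((0, 2)), np.empty((0, 2))
        # Pyramidal Lucas-Kanade: follow the last frame's corners into this frame.
        next_pts, status, _err = cv2.calcOpticalFlowPyrLK(
            self.prev_gray, gray, self.prev_pts, None,
            winSize=(21, 21), maxLevel=3)
        ok = status.ravel() == 1
        prev_ok = self.prev_pts[ok].reshape(-1, 2)
        next_ok = next_pts[ok].reshape(-1, 2)
        self.prev_gray = gray
        self.prev_pts = next_ok.reshape(-1, 1, 2)
        return prev_ok, next_ok
```

In a pipeline like the one described above, the matched point pairs would feed the visual reprojection residuals of the optimization.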

The paper's technical sections are organized as follows:

  • Visual Measurement Constraints
  • IMU Constraints
      • Residual Term
  • Wheeled Odometer Constraints
      • Two-Dimensional Wheel Odometer Algorithm
      • Wheel Odometer Pre-Integration (sketched after this outline)
  • Marginalization and Prior Constraints
  • State Initialization
      • Gyro Zero Offset Initialization
      • Initialization of Key Frame Speed and Gravity
  • Experiments
      • Robustness Verification Experiment
  • Findings
  • Discussion
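The abstract describes wheel odometer pre-integration as combining the chassis speed from the wheel encoders with the IMU angular velocity, so that the relative motion between keyframes is integrated once from raw measurements, independent of the current linearization point. The following is a minimal planar sketch of that idea; the class name, variable names, and midpoint integration rule are illustrative assumptions, not the paper's formulation.

```python
# Minimal sketch of 2D wheel odometer pre-integration (assumed formulation):
# chassis speed v and gyro yaw rate w are accumulated into a relative
# increment (dx, dy, dtheta) expressed in the frame of the previous keyframe,
# so it never has to be re-integrated when the optimizer re-linearizes the
# keyframe poses.
import math

class WheelOdomPreintegration:
    def __init__(self):
        self.dx = 0.0      # relative translation in the last keyframe's frame [m]
        self.dy = 0.0      # [m]
        self.dtheta = 0.0  # relative yaw [rad]

    def integrate(self, v, w, dt):
        """Fold in one sample: v [m/s] from wheel encoders, w [rad/s] from the gyro."""
        theta_mid = self.dtheta + 0.5 * w * dt   # midpoint yaw over this step
        self.dx += v * math.cos(theta_mid) * dt
        self.dy += v * math.sin(theta_mid) * dt
        self.dtheta += w * dt

    def reset(self):
        """Start a new pre-integration window at the next keyframe."""
        self.dx = self.dy = self.dtheta = 0.0
```

At optimization time the residual compares (dx, dy, dtheta) against the relative planar pose predicted by two consecutive keyframe states; because the increment is built from raw measurements only, iterative re-linearization of the poses does not force re-integration (full pre-integration formulations additionally correct for gyro bias changes with first-order Jacobians).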