Abstract

State estimation is crucial for robot autonomy, and visual odometry (VO) has received significant attention in robotics because it can provide accurate state estimates. However, the accuracy and robustness of most existing VO methods degrade under complex conditions, owing to the limited field of view (FOV) of the camera used. In this paper, we present VINS-MKF, a novel tightly-coupled multi-keyframe visual-inertial odometry that provides accurate and robust state estimation for robots in indoor environments. We first modify monocular ORBSLAM (Oriented FAST and Rotated BRIEF Simultaneous Localization and Mapping) to use multiple fisheye cameras alongside an inertial measurement unit (IMU), providing large-FOV visual-inertial information. We then propose a novel VO framework that ensures efficient state estimation by adopting GPU (Graphics Processing Unit) based feature extraction and by running the feature extraction thread, separated from the tracking thread, in parallel with the mapping thread. Finally, we formulate a multi-keyframe, tightly-coupled, visual-inertial nonlinear optimization for accurate state estimation. In addition, an accurate initialization procedure and a novel MultiCol-IMU camera model are incorporated to further improve the performance of VINS-MKF. To the best of our knowledge, this is the first tightly-coupled multi-keyframe visual-inertial odometry that fuses measurements from multiple fisheye cameras and an IMU. We validated VINS-MKF through extensive experiments on home-made datasets, where it showed improved accuracy and robustness over the state-of-the-art VINS-Mono.
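
The threading design described above separates GPU feature extraction from tracking and runs it in parallel with mapping. The following minimal C++ sketch illustrates only that thread layout; all names (Frame, FrameQueue, extraction, tracking, mapping) are hypothetical illustrations, not taken from the VINS-MKF implementation.

// Minimal sketch of the three-thread pipeline described in the abstract:
// feature extraction is decoupled from tracking and runs alongside mapping.
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <queue>
#include <thread>

struct Frame { int id; /* multi-fisheye images + IMU measurements */ };

class FrameQueue {  // thread-safe handoff between pipeline stages
 public:
  void push(Frame f) {
    { std::lock_guard<std::mutex> lk(m_); q_.push(f); }
    cv_.notify_one();
  }
  Frame pop() {
    std::unique_lock<std::mutex> lk(m_);
    cv_.wait(lk, [this] { return !q_.empty(); });
    Frame f = q_.front(); q_.pop(); return f;
  }
 private:
  std::queue<Frame> q_; std::mutex m_; std::condition_variable cv_;
};

constexpr int kFrames = 5;

void extraction(FrameQueue& in, FrameQueue& out) {  // GPU ORB extraction stage
  for (int i = 0; i < kFrames; ++i) { Frame f = in.pop(); out.push(f); }
}
void tracking(FrameQueue& in) {                     // pose tracking stage
  for (int i = 0; i < kFrames; ++i) std::printf("tracked frame %d\n", in.pop().id);
}
void mapping() { /* keyframe insertion and local BA would run here */ }

int main() {
  FrameQueue raw, featured;
  std::thread tExtract(extraction, std::ref(raw), std::ref(featured));
  std::thread tTrack(tracking, std::ref(featured));
  std::thread tMap(mapping);                        // parallel to extraction
  for (int i = 0; i < kFrames; ++i) raw.push(Frame{i});
  tExtract.join(); tTrack.join(); tMap.join();
}

Decoupling extraction from tracking in this way keeps the tracking thread free of the GPU-bound feature computation, which is the efficiency argument the abstract makes.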

Highlights

  • Estimating the state of mobile robots is the basis for ensuring their fundamental autonomous capabilities

  • For the proposed VINS-MKF, state estimation is equivalent to maximum a posteriori (MAP) estimation given the visual-inertial measurements [45]; we therefore formulate a multi-keyframe tightly-coupled visual-inertial nonlinear optimization to improve state estimation accuracy and reduce errors caused by sensor noise and modelling error (a sketch of this MAP formulation is given after this list)

  • We present VINS-MKF, a novel tightly-coupled multi-keyframe visual-inertial odometry algorithm that ensures accurate and robust state estimation for robots in indoor environments
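
As a sketch of the MAP formulation referenced above: under Gaussian noise assumptions, maximizing the posterior over the states \(\mathcal{X}\) given the visual-inertial measurements \(\mathcal{Z}\) is equivalent to minimizing a sum of squared residuals. The notation below is a standard tightly-coupled visual-inertial cost, assumed for illustration rather than copied from the paper:

\[
\mathcal{X}^{*} = \arg\max_{\mathcal{X}} \, p(\mathcal{X} \mid \mathcal{Z})
= \arg\min_{\mathcal{X}} \Big\{ \|\mathbf{r}_{0}\|^{2}
+ \sum_{k} \big\|\mathbf{r}_{\mathcal{I}}\big(\mathbf{z}^{\mathrm{imu}}_{k,k+1}, \mathcal{X}\big)\big\|^{2}_{\Sigma_{\mathcal{I}}}
+ \sum_{(i,j)} \big\|\mathbf{r}_{\mathcal{C}}\big(\mathbf{z}^{\mathrm{cam}}_{i,j}, \mathcal{X}\big)\big\|^{2}_{\Sigma_{\mathcal{C}}} \Big\}
\]

where \(\mathbf{r}_{0}\) is a prior term, \(\mathbf{r}_{\mathcal{I}}\) are IMU pre-integration residuals between consecutive keyframes, \(\mathbf{r}_{\mathcal{C}}\) are reprojection residuals of landmarks observed in the multiple fisheye cameras, and each norm is a Mahalanobis norm weighted by the corresponding measurement covariance.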


Summary

Introduction

Estimating the state of a mobile robot is the basis for ensuring its fundamental autonomous capabilities. We propose a tightly-coupled multi-keyframe visual-inertial odometry for accurate and robust state estimation, which modifies the state-of-the-art keyframe-based monocular ORBSLAM [26] to serve mobile robots in challenging indoor environments. A nonlinear optimization method, characterized as multi-keyframe, tightly-coupled and visual-inertial, is formulated to further ensure the performance of state estimation. Three novel techniques are coupled into VINS-MKF to improve state estimation: accurate initialization with a hardware synchronization mechanism and a self-calibration method, a MultiCol-IMU camera model, and an improved multi-keyframe double window structure. To the best of our knowledge, the proposed VINS-MKF is the first tightly-coupled multi-keyframe visual-inertial odometry based on monocular ORBSLAM and modified to use multiple fisheye cameras alongside an inertial measurement unit (IMU).
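
The MultiCol-IMU camera model mentioned above treats the multi-fisheye rig as a set of cameras rigidly mounted on a common body frame, each with its own extrinsic calibration. The C++ sketch below illustrates only that rig geometry (one body pose plus per-camera extrinsics); the type and function names are assumptions for illustration, and the fisheye projection itself is omitted.

// Illustrative sketch (assumed names, not the paper's code) of the MultiCol
// idea: every camera c has a fixed extrinsic pose T_cb relative to the body
// frame, so one body pose maps a world landmark into any camera of the rig
// before a fisheye model would project it to pixels.
#include <array>
#include <cstdio>

using Vec3 = std::array<double, 3>;
using Mat3 = std::array<Vec3, 3>;

struct Pose {   // rigid transform: x_out = R * x_in + t
  Mat3 R; Vec3 t;
  Vec3 apply(const Vec3& x) const {
    Vec3 y{};
    for (int i = 0; i < 3; ++i)
      y[i] = R[i][0] * x[0] + R[i][1] * x[1] + R[i][2] * x[2] + t[i];
    return y;
  }
};

// World landmark -> camera frame of camera c:  p_c = T_cb * T_bw * p_w,
// where T_bw is the (inverted) body pose estimated by the odometry and
// T_cb is the calibrated extrinsic of camera c.
Vec3 worldToCamera(const Pose& T_bw, const Pose& T_cb, const Vec3& p_w) {
  return T_cb.apply(T_bw.apply(p_w));
}

int main() {
  Mat3 I{{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}};
  Pose identity{I, {0, 0, 0}};
  Vec3 p_c = worldToCamera(identity, identity, {1.0, 2.0, 3.0});
  std::printf("p_c = (%f, %f, %f)\n", p_c[0], p_c[1], p_c[2]);
}

With this parameterization, the odometry estimates a single body pose per multi-keyframe, while reprojection residuals can be formed in every fisheye camera that observes a landmark.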

Related Work
Multi-Keyframe Tightly-Coupled Visual-Inertial Nonlinear Optimization
  MultiCol-IMU Camera Model and Structure
    Camera Model for a Single Camera
    MultiCol-IMU Camera Model
  IMU Pre-Integration
  Derivation of the Proposed Nonlinear Optimization
    The Measurements
    Derivation of the Nonlinear Optimization
    The Solution to the Nonlinear Optimization
    IMU Error Term
Multi-Keyframe Tightly-Coupled Visual-Inertial Odometry
  Visual-Inertial Initialization
    Sensor Synchronization
    Multi-Keyframe SFM
  GPU-Based Feature Extraction
  Tracking
    The Introduction of the Hyper Graph Method
    Different Initial MF Pose Prediction Method
    The Different Co-Visibility Graph and the Motion-Only BA Optimization Method
    Additional Criterion for Spawning a New MKF
  Local Mapping
    Improved Double Window Structure
Experiments
  Experiment 1: Accuracy Evaluation
    Experiment 1 on Home-Made Datasets
    Experiment 1 on Corridor Environment
  Experiment 2: Efficiency Evaluation
Conclusions and Future Work