Abstract

The fusion of visual and inertial odometry has matured greatly owing to the complementary nature of the two sensor types. However, size and cost constraints make high-quality sensors and powerful processors impractical in many applications, and algorithmic robustness and computational efficiency remain challenging. In this work, we present VIO-Stereo, a stereo visual-inertial odometry (VIO) system that jointly combines the measurements of stereo cameras and an inexpensive inertial measurement unit (IMU). Visual measurements and IMU readings are tightly integrated through nonlinear optimization. To reduce the computational cost, we detect features with the efficient FAST detector and track them with the KLT sparse optical flow algorithm. We also incorporate the accelerometer bias into the measurement model and optimize it together with the other state variables. Additionally, we perform circular matching between the previous and current stereo image pairs to reject outliers in the stereo matching and feature tracking steps, which reduces feature mismatches and improves the robustness and accuracy of the system. Finally, this work contributes an experimental comparison of monocular and stereo visual-inertial odometry by evaluating our method on the public EuRoC dataset. Experimental results demonstrate that our method exhibits performance competitive with state-of-the-art techniques.
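
The frontend described above can be illustrated with a short sketch. The following is a minimal, illustrative example (not the authors' code) of FAST corner detection followed by pyramidal KLT tracking using OpenCV; the helper names (detect_fast_features, track_klt), the detector threshold, the feature budget, and the KLT window parameters are assumptions chosen for illustration rather than values taken from the paper.

```python
# Illustrative sketch of a FAST + KLT frontend (assumed parameters, not the paper's).
import cv2
import numpy as np

def detect_fast_features(gray, max_features=200, threshold=20):
    """Detect FAST corners and return them as an (N, 1, 2) float32 array."""
    fast = cv2.FastFeatureDetector_create(threshold=threshold, nonmaxSuppression=True)
    keypoints = fast.detect(gray, None)
    # Keep only the strongest responses to bound the computational cost.
    keypoints = sorted(keypoints, key=lambda k: k.response, reverse=True)[:max_features]
    return np.float32([k.pt for k in keypoints]).reshape(-1, 1, 2)

def track_klt(prev_gray, curr_gray, prev_pts):
    """Track features between consecutive frames with pyramidal Lucas-Kanade flow."""
    curr_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, prev_pts, None,
        winSize=(21, 21), maxLevel=3,
        criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
    good = status.ravel() == 1
    return prev_pts[good], curr_pts[good]
```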

Highlights

  • In recent years, with the advancement of sparse nonlinear optimization theory, camera technology, and computing performance, Visual Simultaneous Localization And Mapping (VSLAM) technology has achieved tremendous development [1,2]

  • Many works based on nonlinear optimization have been reported, including SVO [3], LSD-SLAM [4], DSO [5], and ORB-SLAM [6,7]

  • Visual-inertial odometry (VIO) algorithms have also been explored in related areas, such as initialization [26] and online calibration [27]

Introduction

With the advancement of sparse nonlinear optimization theory, camera technology, and computing performance, Visual Simultaneous Localization And Mapping (VSLAM) technology has achieved tremendous development [1,2]. In this work, circular stereo matching is performed to remove outliers in the stereo matching and feature tracking steps, reducing feature mismatches and improving the robustness and accuracy of the system. IMU measurements are tightly fused with the visual measurements of the stereo cameras, yielding a highly accurate and robust visual-inertial odometry that can run in real time on devices such as drones and recovers the 6-degree-of-freedom pose of the camera (robot) motion with real scale. In the visual processing frontend (Section 3), features are extracted and tracked, and IMU measurements between consecutive frames are pre-integrated. The method was extensively validated in comparison to state-of-the-art open-source VIO methods, including OKVIS [14], VINS-MONO [19], and S-MSCKF [16].
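
As a concrete illustration of the circular matching described above, the sketch below (again not the authors' implementation) tracks each feature around the loop previous left → current left → current right → previous right → previous left and keeps it only if the loop closes near its starting point. The function name circular_match, the loop order, and the 1-pixel closure threshold are assumptions made for this example.

```python
# Illustrative circular-matching outlier check built on OpenCV's KLT tracker.
import cv2
import numpy as np

def circular_match(prev_l, prev_r, curr_l, curr_r, pts, max_loop_error=1.0):
    """Keep only features whose KLT track around the stereo/temporal loop closes.

    pts: (N, 1, 2) float32 feature locations in the previous left image.
    Returns the surviving points in the previous left, current left,
    and current right images.
    """
    def klt(img0, img1, p0):
        p1, status, _err = cv2.calcOpticalFlowPyrLK(
            img0, img1, p0, None, winSize=(21, 21), maxLevel=3)
        return p1, status.ravel().astype(bool)

    p_cl, s1 = klt(prev_l, curr_l, pts)    # previous left  -> current left
    p_cr, s2 = klt(curr_l, curr_r, p_cl)   # current left   -> current right
    p_pr, s3 = klt(curr_r, prev_r, p_cr)   # current right  -> previous right
    p_pl, s4 = klt(prev_r, prev_l, p_pr)   # previous right -> back to previous left

    # The track must return close to where it started, or the feature is rejected.
    loop_error = np.linalg.norm((p_pl - pts).reshape(-1, 2), axis=1)
    keep = s1 & s2 & s3 & s4 & (loop_error < max_loop_error)
    return pts[keep], p_cl[keep], p_cr[keep]
```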

Visual-Inertial Odometry Overview
Visual Processing Frontend
Visual-Inertial Initialization
Stereo Vision Initialization Based on a Sliding Window
Gyroscope Bias Estimation
Tightly Coupled Stereo Visual-Inertial Odometry
IMU and Visual Error Term
Marginalization
Results
Conclusions