Abstract

Visual-inertial odometry (VIO) has recently received much attention for efficient and accurate ego-motion estimation of unmanned aerial vehicle (UAV) systems. Recent studies have shown that optimization-based algorithms typically achieve high accuracy when given a sufficient amount of information, but occasionally suffer from divergence when solving highly non-linear problems. Furthermore, their performance depends significantly on the accuracy of the initialization of inertial measurement unit (IMU) parameters. In this paper, we propose a novel VIO algorithm for estimating the motion state of UAVs with high accuracy. The main technical contributions are the fusion of visual information and pre-integrated inertial measurements in a joint optimization framework, and the stable initialization of scale and gravity using relative pose constraints. To account for the ambiguity and uncertainty of VIO initialization, a local scale parameter is adopted in the online optimization. Quantitative comparisons with state-of-the-art algorithms on the European Robotics Challenge (EuRoC) dataset verify the efficacy and accuracy of the proposed method.
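
To make the joint formulation more concrete, the objective below is a representative sketch of a tightly-coupled visual-inertial cost of the kind described above; the paper's exact formulation may differ, and all symbols here are illustrative. Keyframe states and a local scale are jointly optimized over a visual reprojection term and an IMU pre-integration term:

$$
\min_{\mathcal{X},\,s}\;\sum_{(i,l)} \rho\!\left(\left\lVert \mathbf{z}_{il} - \pi\!\left(\mathbf{R}_{i}^{\top}\!\left(\mathbf{x}_{l} - \mathbf{p}_{i}\right)\right) \right\rVert^{2}_{\Sigma_{\mathrm{v}}}\right) \;+\; \sum_{k} \left\lVert \mathbf{r}_{\mathcal{I}}\!\left(\Delta\tilde{\mathbf{R}}_{k},\,\Delta\tilde{\mathbf{v}}_{k},\,\Delta\tilde{\mathbf{p}}_{k};\,\mathcal{X},\,s\right) \right\rVert^{2}_{\Sigma_{\mathcal{I}}}
$$

Here $\mathcal{X}$ collects keyframe rotations $\mathbf{R}_i$, positions $\mathbf{p}_i$, velocities, IMU biases, and landmark positions $\mathbf{x}_l$; $\mathbf{z}_{il}$ is the observation of landmark $l$ in keyframe $i$, $\pi(\cdot)$ is the camera projection, and $\rho$ is a robust loss. The inertial residual $\mathbf{r}_{\mathcal{I}}$ penalizes disagreement between the pre-integrated IMU increments $(\Delta\tilde{\mathbf{R}}_k, \Delta\tilde{\mathbf{v}}_k, \Delta\tilde{\mathbf{p}}_k)$ and the keyframe motion, with the local scale $s$ linking the up-to-scale visual geometry to the metric inertial measurements.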

Highlights

  • Ego-motion estimation is essential for robots and unmanned aerial vehicle (UAV) systems. To estimate the current pose of a robot, various sensors such as GPS, inertial measurement units (IMUs), wheel odometers, and cameras have been used

  • We propose a novel visual-inertial odometry algorithm using non-linear optimization of tightly-coupled visual and pre-integrated IMU observations with a local scale variable

  • The experimental results show that the proposed method achieves higher accuracy than the state-of-the-art visual-inertial odometry (VIO) algorithms on the well-known European Robotics Challenge (EuRoC) benchmark dataset

Summary

Introduction

Ego-motion estimation is essential for robots and unmanned aerial vehicle (UAV) systems. We propose a VIO system that uses a tightly-coupled optimization framework of visual and pre-integrated inertial observations, together with a robust initialization method for the scale and gravity. By using the pre-integrated IMU poses as the inertial costs, the number of pose parameters in the optimization window is drastically decreased, roughly from the number of frames to the number of keyframes. This reduction enables us to increase the size of the optimization window, which improves the accuracy and robustness of the system. We propose a novel visual-inertial odometry algorithm using non-linear optimization of tightly-coupled visual and pre-integrated IMU observations with a local scale variable. By enforcing relative pose constraints between keyframes acquired from visual observations, the initial scale and gravity vectors can be estimated reliably, without assuming any bootstrapping motion patterns or that the bias parameters are given. The experimental results show that the proposed method achieves higher accuracy than state-of-the-art VIO algorithms on the well-known EuRoC benchmark dataset.
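
As a rough, self-contained illustration of this compression step (not the authors' implementation; the function name `preintegrate` and its arguments are hypothetical, and bias Jacobians and noise-covariance propagation are omitted), the following Python sketch folds the raw IMU samples between two keyframes into a single relative-motion constraint:

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix so that skew(w) @ v == np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def so3_exp(phi):
    """Rodrigues' formula: rotation matrix for a rotation vector phi."""
    theta = np.linalg.norm(phi)
    if theta < 1e-8:
        return np.eye(3) + skew(phi)
    K = skew(phi / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def preintegrate(gyro, accel, dts, bg, ba):
    """Fold the raw IMU samples between two keyframes into one
    relative-motion constraint (dR, dv, dp), expressed in the frame of
    the first keyframe and corrected by the current bias estimates.

    gyro, accel : (N, 3) angular-rate / specific-force samples
    dts         : (N,)   sample intervals in seconds
    bg, ba      : (3,)   current gyroscope / accelerometer bias estimates
    """
    dR, dv, dp = np.eye(3), np.zeros(3), np.zeros(3)
    for w, a, dt in zip(gyro, accel, dts):
        a_corr = a - ba                            # bias-corrected specific force
        dp = dp + dv * dt + 0.5 * (dR @ a_corr) * dt**2
        dv = dv + (dR @ a_corr) * dt
        dR = dR @ so3_exp((w - bg) * dt)           # bias-corrected rotation update
    return dR, dv, dp
```

The resulting increments act as a single inertial cost term between consecutive keyframes, which is why only keyframe states, rather than every frame, need to appear in the optimization window.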

Related Work
System Overview
Visual Inertial Optimization
Visual Reprojection Error
IMU Pre-Integration
Online Optimization
Marginalization
Bootstrapping
Vision-Only Map Building
Pose Graph Optimization with IMU Pre-Integration
Convergence Check
IMU Biases Update
Experiments
Comparison with the State-of-the-Art Algorithms
Bootstrapping Experiments
Findings
Conclusions
