Abstract

The development and maturation of simultaneous localization and mapping (SLAM) in robotics has opened the door to applying visual inertial odometry (VIO) to robot navigation systems. For a patrol robot with no Global Positioning System (GPS) support available, the embedded VIO components, generally composed of an Inertial Measurement Unit (IMU) and a camera, fuse inertial dead reckoning with the SLAM computation and enable the robot to estimate its location within a map. The highlights of the optimized VIO design lie in a simplified VIO initialization strategy and a fused point- and line-feature matching method for efficient pose estimation in the front-end. With a tightly-coupled VIO architecture, the system state is explicitly expressed as a vector and estimated by the state estimator. The associated back-end problems of data association, state optimization, sliding-window management and timestamp alignment are discussed in detail. Dataset tests and real substation scene tests are conducted, and the experimental results indicate that the proposed VIO achieves accurate pose estimation with favorable initialization efficiency and clear map representations in the environments of interest. The proposed VIO design can therefore serve as a reference for a class of visual and inertial SLAM application domains in which no external location reference is assumed.
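The abstract notes that in a tightly-coupled VIO the system state is explicitly expressed as a vector and refined by the state estimator. As a hedged illustration (the layout and dimensions below are a common convention, not taken from the paper), such a state typically stacks the pose, velocity, and IMU biases:

```python
import numpy as np

def make_state(p, q, v, bg, ba):
    """Stack a hypothetical tightly-coupled VIO state into one vector:
    position (3), orientation quaternion (4, w-first), velocity (3),
    gyroscope bias (3) and accelerometer bias (3) -> 16-d vector."""
    x = np.concatenate([p, q, v, bg, ba])
    assert x.shape == (16,)
    return x

# Example: a robot at the origin, identity orientation, at rest,
# with zero-initialized IMU biases.
x0 = make_state(p=np.zeros(3),
                q=np.array([1.0, 0.0, 0.0, 0.0]),  # w, x, y, z
                v=np.zeros(3),
                bg=np.zeros(3),
                ba=np.zeros(3))
```

In a sliding-window back-end such as the one described here, one such block per keyframe (plus the landmark parameters) forms the full optimization vector.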

Highlights

  • When robots operate in an unknown environment, an absolute external location reference such as a Global Positioning System (GPS) may not be available, and navigation technology that requires no prior knowledge becomes essential

  • The camera is mounted on the stationary base of the robot and provides the visual inertial odometry (VIO) system with sequential image information, from which the robot pose in the world coordinate frame is estimated; this can further be applied to represent and address the structure from motion (SFM) problem [23,24]

  • The essential part of integrating these two components lies in updating the state variables of the tightly-coupled VIO system as time evolves, so as to efficiently obtain globally optimal solutions for the state variables



Introduction

When robots operate in an unknown environment, an absolute external location reference such as a Global Positioning System (GPS) may not be available, and navigation technology that requires no prior knowledge is highly desirable. To guarantee long-term, stable availability in cases where only limited feature points or textures are present, some research has improved the feature-extraction pattern by fusing line features or plane features in the VIO front-end, enabling the cameras to keep tracking efficiently. These solutions are equivalent to imposing additional constraints on the overall pose estimation task [19,20]. The introduction of this practical optimization model improves the efficiency of state estimation and mapping. Both dataset tests and substation scene tests for robot routing-inspection applications have been conducted, and detailed evaluation results are given.
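The additional constraint that a line feature contributes can be sketched alongside the usual point reprojection error. The residual forms below are a minimal illustration (function names, the pinhole model, and the endpoint-to-line distance formulation are common conventions, not the paper's exact method): a point feature yields a 2-D reprojection residual, while a line feature yields the distances of its projected 3-D endpoints to the observed 2-D image line.

```python
import numpy as np

def project(K, R, t, Pw):
    """Pinhole projection of a 3-D world point into pixel coordinates."""
    Pc = R @ Pw + t          # world -> camera frame
    uvw = K @ Pc             # apply intrinsics
    return uvw[:2] / uvw[2]  # perspective division

def point_residual(K, R, t, Pw, uv_obs):
    """2-D reprojection error of a point feature."""
    return project(K, R, t, Pw) - uv_obs

def line_residual(K, R, t, P1w, P2w, line):
    """Signed distances of two projected 3-D endpoints to the observed
    2-D line (a, b, c), with a*u + b*v + c = 0 and a^2 + b^2 = 1."""
    a, b, c = line
    return np.array([a * u + b * v + c
                     for u, v in (project(K, R, t, P)
                                  for P in (P1w, P2w))])

# Illustrative intrinsics and an identity pose.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])
R, t = np.eye(3), np.zeros(3)

# A point 2 m straight ahead projects onto the principal point.
r_pt = point_residual(K, R, t, np.array([0.0, 0.0, 2.0]),
                      np.array([320.0, 240.0]))

# A vertical 3-D segment ahead of the camera lies on the image
# line u = 320, i.e. (a, b, c) = (1, 0, -320): zero line residual.
r_ln = line_residual(K, R, t,
                     np.array([0.0, -0.5, 2.0]),
                     np.array([0.0,  0.5, 2.0]),
                     (1.0, 0.0, -320.0))
```

Stacking both residual types into one cost function is what "exerts additional constraints" on the pose: in weakly textured scenes the line terms keep the estimate observable when few point matches survive.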

Overall Description of Tightly-Coupled VIO
VIO Anatomy
Reprojection Error of the Camera
IMU Pre-Integration
Direct Method
VIO Initialization Design
Gyro Bias Estimation
Accelerometer Bias and Gravity Estimation
Scale Factor and Velocity Estimation
Tightly-Coupled Information Fusion Based on Sliding Window
Sliding Window Model
Visual Measurement Model
Experimental Section
Dataset Tests and Analyses
VIO Initialization Results
Navigation Performance Evaluations
Conclusions
