Abstract

To achieve high-precision estimation of indoor robot motion, a tightly coupled RGB-D visual-inertial SLAM system based on multiple features is proposed herein. Most traditional visual SLAM methods rely only on point features for matching and therefore often underperform in low-textured scenes. Besides points, line segments can also provide geometric structure information about the environment. This paper utilizes both points and lines in low-textured scenes to increase the robustness of the RGB-D SLAM system. In addition, we implement a fast initialization process based on the RGB-D camera to improve the real-time performance of the proposed system and design a new back-end nonlinear optimization framework. The state vector is optimized by minimizing a cost function formed by the pre-integrated IMU residuals and the reprojection errors of points and lines in a sliding window. Experiments on public datasets show that our system achieves higher trajectory and pose-estimation accuracy and robustness than several state-of-the-art visual SLAM systems.
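The exact cost function is given in the full text; as a minimal sketch, a sliding-window objective of the kind the abstract describes typically takes the following form, where the symbol names follow common VIO conventions and are assumptions rather than the paper's own notation:

\min_{\mathcal{X}} \left\{ \left\| r_{p} - H_{p}\mathcal{X} \right\|^{2}
  + \sum_{k \in \mathcal{B}} \left\| r_{\mathcal{B}}\!\left(\hat{z}_{b_{k} b_{k+1}}, \mathcal{X}\right) \right\|_{\Sigma_{b_{k} b_{k+1}}}^{2}
  + \sum_{(l,j) \in \mathcal{P}} \rho\!\left( \left\| r_{\mathcal{P}}\!\left(\hat{z}_{l}^{c_{j}}, \mathcal{X}\right) \right\|_{\Sigma_{\mathcal{P}}}^{2} \right)
  + \sum_{(m,j) \in \mathcal{L}} \rho\!\left( \left\| r_{\mathcal{L}}\!\left(\hat{z}_{m}^{c_{j}}, \mathcal{X}\right) \right\|_{\Sigma_{\mathcal{L}}}^{2} \right) \right\}

Here \mathcal{X} is the state vector (keyframe poses, velocities, IMU biases, and point/line landmarks in the window), the first term is a prior from marginalization, r_{\mathcal{B}} is the IMU pre-integration residual between consecutive keyframes, r_{\mathcal{P}} and r_{\mathcal{L}} are the point and line reprojection residuals, \Sigma denotes the corresponding covariances, and \rho is a robust kernel.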

Highlights

  • With the development of mobile robots, simultaneous localization and mapping (SLAM) [1] has become an attractive research topic in many self-localization robotics areas in recent years

  • Visual-inertial odometry (VIO), which combines camera and IMU measurements, has improved the accuracy and robustness of indoor robot localization

  • First, with a set of corresponding 3D point features, the iterative closest point (ICP) algorithm is used to recover the initial pose in the structure-from-motion (SFM) process (a sketch follows this list)
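The paper does not list its implementation; below is a minimal Python sketch of the rigid alignment of corresponding 3D points that each ICP iteration reduces to (the closed-form Kabsch/Umeyama solution). Function and variable names are illustrative, not taken from the paper.

import numpy as np

def align_point_sets(src, dst):
    """Estimate R, t such that dst is approximately R @ src + t.

    src, dst: (N, 3) arrays of matched 3D points, e.g. back-projected from
    two RGB-D frames. Closed-form solution via SVD; this is the alignment
    step performed inside each ICP iteration.
    """
    src_c = src - src.mean(axis=0)          # center both point sets
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                     # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # correct for a possible reflection so that R is a proper rotation
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

With an RGB-D camera the matched 3D points can be obtained directly by back-projecting feature pixels with their aligned depth values, which is what makes this initialization fast.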

Summary

Introduction

Simultaneous localization and mapping (SLAM) [1] has become an attractive research topic in many self-localization robotics areas with the development of mobile robots. Visual-inertial odometry (VIO), which combines camera and IMU measurements, has improved the accuracy and robustness of indoor robot localization. Stefan et al. [8] proposed open keyframe-based visual-inertial SLAM (OKVIS) based on a nonlinear optimization framework, which uses IMU pre-integration [9] to avoid repeated IMU integration and a first-in-last-out sliding-window method to bound the computation needed to handle all measurements. Stereo cameras require a high computing cost to generate the corresponding depth information. Compared with these methods, RGB-D cameras obtain both color images and aligned depth information, which simplifies the triangulation of point features and enables a faster initialization process. This paper builds an RGB-D-inertial nonlinear optimization framework with constraints from the IMU kinematic model and the reprojection of points and lines in a sliding window. The results validate the accuracy and robustness of the proposed system.
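As a concrete illustration of why aligned depth simplifies point triangulation, here is a minimal sketch of back-projecting a feature pixel to a 3D point with the pinhole model. The intrinsics fx, fy, cx, cy and the depth scale are assumed for illustration and are not taken from the paper.

import numpy as np

def back_project(u, v, depth_raw, fx, fy, cx, cy, depth_scale=0.001):
    """Recover the 3D point (camera frame) for pixel (u, v).

    depth_raw is the value of the aligned depth image at (u, v); depth_scale
    converts it to meters (0.001 assumes a millimeter-encoded depth map).
    Because depth comes directly from the RGB-D sensor, no multi-view
    triangulation is needed to initialize the landmark.
    """
    z = depth_raw * depth_scale
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])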

Notations and Definitions
IMU Pre-Integration
Representation of 3D Line Features
Overall Structure of the VIO System
Nonlinear Optimization Framework
System Initialization
Visual-IMU Alignment
IMU Measurement Residual
Visual Reprojection Residual
Marginalization
Experiment
STAR-Center Dataset
OpenLORIS-Scene Dataset
Running Time Performance Evaluation
Conclusions