Abstract

Simultaneous Localization and Mapping (SLAM) that combines visual and inertial measurements has attracted significant attention in the Robotics and Computer Vision communities. However, balancing real-time performance and accuracy remains a challenge. This paper therefore proposes a feedback mechanism for stereo Visual-Inertial SLAM (VISLAM) that provides accurate, real-time motion estimation and map reconstruction. The key idea of the feedback mechanism is that the frontend and backend of the VISLAM system can reinforce each other. The results of the backend optimization are fed back to the Kalman Filter (KF)-based frontend to reduce the motion estimation error caused by the linearization inherent in the KF estimator. Conversely, the more accurate frontend motion estimate accelerates the backend optimization by providing a better initial state. In addition, we design a relocalization and continued SLAM framework built on the feedback mechanism for applications such as autonomous robot navigation and continued SLAM. We evaluated the proposed VISLAM system on the public EuRoC dataset and in real-world environments. The experimental results demonstrate that our system compares favorably with other state-of-the-art VISLAM systems in terms of both computational cost and accuracy.
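As a rough illustration of this feedback idea, the sketch below is our own toy example on a 1-D constant-velocity system, not the paper's code: a Kalman filter "frontend" fuses noisy position measurements, a least-squares "backend" periodically re-fits a window of measurements warm-started with the frontend estimate, and the refined state is fed back to reset the filter. The names ToyKalmanFrontend and refine_window are illustrative assumptions.

import numpy as np
from scipy.optimize import least_squares

class ToyKalmanFrontend:
    """Kalman-filter 'frontend': propagates a [position, velocity] state and
    fuses noisy position measurements."""
    def __init__(self, x0, P0, q=1e-3, r=0.04):
        self.x = np.array(x0, dtype=float)
        self.P = np.array(P0, dtype=float)
        self.q, self.r = q, r

    def step(self, z, dt=0.1):
        F = np.array([[1.0, dt], [0.0, 1.0]])       # constant-velocity model
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + self.q * np.eye(2)
        H = np.array([[1.0, 0.0]])                  # position is observed
        S = H @ self.P @ H.T + self.r
        K = self.P @ H.T / S                        # Kalman gain
        self.x = self.x + (K * (z - H @ self.x)).ravel()
        self.P = (np.eye(2) - K @ H) @ self.P
        return self.x.copy()

    def feedback(self, x_refined):
        """Backend-to-frontend feedback: reset the filter state (and hence its
        linearization point in a real EKF) with the optimized estimate."""
        self.x = np.array(x_refined, dtype=float)

def refine_window(x_init, zs, dt=0.1):
    """'Backend': nonlinear least squares over a window of measurements,
    warm-started with the frontend estimate (frontend-to-backend direction)."""
    t = dt * np.arange(len(zs))
    def residuals(x):
        p_end, v = x
        return (p_end - v * (t[-1] - t)) - np.asarray(zs)
    return least_squares(residuals, x0=np.asarray(x_init, dtype=float)).x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frontend = ToyKalmanFrontend([0.0, 0.0], np.eye(2))
    window, v_true = [], 0.5                        # 0.5 m/s ground-truth velocity
    for k in range(1, 101):
        z = v_true * 0.1 * k + rng.normal(0.0, 0.2) # noisy position fix
        est = frontend.step(z)
        window.append(z)
        if k % 10 == 0:                             # backend runs every 10 steps
            frontend.feedback(refine_window(est, window))
            window.clear()
    print("final frontend state [p, v]:", frontend.x)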

Highlights

  • Visual Odometry (VO) [1], [2] and Visual Simultaneous Localization and Mapping (VSLAM) [3], [4] techniques, as solutions for localization and mapping in GPS-denied environments, have been extensively studied in many applications in computer vision, Augmented and Virtual Reality (AR&VR), and mobile robotics [5]

  • A general optimization-based Visual Inertial Simultaneous Localization and Mapping (VISLAM) framework [35] was proposed that extends the previous work [16] to multiple sensors, in which the state of the system and a representation of the environment are estimated by local Bundle Adjustment (BA) in one thread, and loops are closed in a lightweight manner in a parallel thread

  • We compare the proposed algorithm with S-MSCKF [31], a state-of-the-art filtering-based Visual Inertial Odometry (VIO) method, and VINS-Fusion [35], a state-of-the-art optimization-based VISLAM system that supports three different sensor combinations


Summary

INTRODUCTION

Visual Odometry (VO) [1], [2] and Visual Simultaneous Localization and Mapping (VSLAM) [3], [4] techniques, as solutions for localization and mapping in GPS-denied environments, have been extensively studied in many applications in computer vision, Augmented and Virtual Reality (AR&VR), and mobile robotics [5]. Pure vision-based VO/VSLAM methods are sensitive to challenging scenarios such as textureless surfaces, motion blur, occlusions, and illumination changes [6]–[9]. To address these problems, Visual Inertial Odometry (VIO) and Visual Inertial Simultaneous Localization and Mapping (VISLAM) techniques [10] fuse Inertial Measurement Unit (IMU) data into the VO/VSLAM system and achieve greater robustness and higher accuracy even in such challenging scenarios. To the best of our knowledge, this paper presents the first tightly-coupled stereo VISLAM system that combines a filtering-based frontend and an optimization-based backend through a feedback mechanism, resulting in a significant improvement in the accuracy and efficiency of 6 DoF pose estimation.
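To make the benefit of inertial fusion concrete, the following toy sketch is our own illustration, not the paper's pipeline: it dead-reckons a 1-D state from high-rate accelerometer samples between camera frames and blends in a visual position fix only when a frame is usable, so a short visual dropout (e.g., motion blur) does not stall the estimate. The names propagate_imu and correct_visual are hypothetical.

import numpy as np

def propagate_imu(p, v, accels, dt_imu):
    """Dead-reckon position/velocity by integrating accelerometer samples."""
    for a in accels:
        p += v * dt_imu + 0.5 * a * dt_imu ** 2
        v += a * dt_imu
    return p, v

def correct_visual(p_pred, p_visual, weight=0.7):
    """Blend the IMU prediction with a visual position fix (a stand-in for a
    proper Kalman update) when a usable camera frame is available."""
    return (1.0 - weight) * p_pred + weight * p_visual

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    dt_imu, imu_per_frame = 0.005, 20               # 200 Hz IMU, 10 Hz camera
    p, v, true_a = 0.0, 0.0, 0.3                    # constant true acceleration
    for frame in range(20):
        accels = true_a + rng.normal(0.0, 0.02, imu_per_frame)  # noisy IMU samples
        p, v = propagate_imu(p, v, accels, dt_imu)
        t = (frame + 1) * imu_per_frame * dt_imu
        p_true = 0.5 * true_a * t ** 2
        if frame % 4 != 3:                          # every 4th frame is "blurred"
            p = correct_visual(p, p_true + rng.normal(0.0, 0.01))
    print(f"estimated position {p:.3f} m vs. true {p_true:.3f} m")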

RELATED WORK
BACKEND OPTIMIZATION
FEEDBACK MECHANISM
RELOCALIZATION AND CONTINUED SLAM FRAMEWORK WITH FEEDBACK MECHANISM
RELOCALIZATION
EXPERIMENTS
Findings
CONCLUSION