Abstract

In this paper, we present a novel tightly coupled probabilistic monocular visual-odometric Simultaneous Localization and Mapping algorithm that fuses wheel encoders and a MEMS gyroscope, providing accurate, robust, and long-term localization for a ground robot moving on a plane. First, we present an odometer preintegration theory that integrates the wheel encoder and gyroscope measurements into a local frame. The preintegration theory properly addresses the manifold structure of the rotation group SO(3) and carefully handles uncertainty propagation and bias correction. The resulting odometer error term is then formulated from the preintegration model and tightly integrated into the visual optimization framework. Furthermore, we introduce a complete tracking framework that provides different strategies for motion tracking when (1) both measurements are available, (2) visual measurements are unavailable, and (3) the wheel encoder experiences slippage, which makes the system accurate and robust. Finally, the proposed algorithm is evaluated through extensive experiments, and the experimental results demonstrate the superiority of the proposed system.
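The abstract does not reproduce the preintegration model itself, but as a rough illustration of the idea, the minimal sketch below accumulates gyroscope and wheel-encoder samples into a relative rotation and translation on SO(3) x R^3 expressed in the frame of the first sample. The function names, the constant-bias assumption, and the omission of covariance propagation and bias Jacobians are simplifications of ours, not the authors' formulation.

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix of a 3-vector."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def so3_exp(phi):
    """Exponential map so(3) -> SO(3) (Rodrigues' formula)."""
    theta = np.linalg.norm(phi)
    if theta < 1e-9:
        return np.eye(3) + skew(phi)
    a = phi / theta
    return (np.cos(theta) * np.eye(3)
            + np.sin(theta) * skew(a)
            + (1.0 - np.cos(theta)) * np.outer(a, a))

def preintegrate(gyro, wheel_vel, dt, bias_g=np.zeros(3)):
    """Accumulate gyroscope samples (rad/s) and body-frame wheel velocities
    (m/s) into a relative pose (delta_R, delta_p) in the local start frame."""
    delta_R = np.eye(3)
    delta_p = np.zeros(3)
    for w, v in zip(gyro, wheel_vel):
        delta_p = delta_p + delta_R @ (v * dt)              # translate with current orientation
        delta_R = delta_R @ so3_exp((w - bias_g) * dt)      # compose rotation on the SO(3) manifold
    return delta_R, delta_p
```

For a differential-drive base, each `v` would typically be the body-frame velocity `[(v_left + v_right) / 2, 0, 0]` computed from the two wheel speeds; the paper's full model additionally propagates measurement uncertainty and keeps Jacobians for first-order bias correction.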

Highlights

  • Simultaneous localization and mapping (SLAM) from onboard sensors is a fundamental technology that allows an autonomous mobile robot to interact safely within its workspace

  • We study a monocular vision-based localization and mapping algorithm for domestic ground robots moving on a plane, such as cleaning robots, nursing robots, and restaurant waiter robots

  • Since the proposed visual-odometric SLAM (VOSLAM) system requires a good initial value to bootstrap, we present a simple initialization method that builds an initial map of the environment with scale and selectively estimates the initial value of the gyroscope bias (a hedged illustration of the bias-estimation step follows this list)
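The exact initialization procedure is not given on this page. As a hedged illustration of the gyroscope-bias part of such an initialization, one common approach is to solve a small least-squares problem that aligns gyro-preintegrated relative rotations with the relative rotations recovered from vision. Everything below, including the function names and the assumption that the bias Jacobians come from the preintegration step, is our own sketch rather than the authors' method.

```python
import numpy as np

def so3_log(R):
    """Logarithm map SO(3) -> so(3) (rotation vector); small-angle safe,
    not handling the theta ~ pi corner case."""
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.arccos(cos_theta)
    if theta < 1e-9:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta / (2.0 * np.sin(theta)) * w

def estimate_gyro_bias(delta_R_vis, delta_R_gyro, jacobians):
    """Least-squares estimate of the gyroscope bias from the rotation
    mismatch between visually estimated and gyro-preintegrated relative
    rotations; jacobians[i] is d(delta_R_gyro_i)/d(bias) from preintegration."""
    A, b = np.zeros((3, 3)), np.zeros(3)
    for R_vis, R_gyro, J in zip(delta_R_vis, delta_R_gyro, jacobians):
        r = so3_log(R_gyro.T @ R_vis)   # residual rotation vector per interval
        A += J.T @ J
        b += J.T @ r
    return np.linalg.solve(A, b)        # bias correction that best aligns the two
```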



Introduction

Simultaneous localization and mapping (SLAM) from onboard sensors is a fundamental technology that allows an autonomous mobile robot to interact safely within its workspace. SLAM builds a globally consistent representation of the environment (i.e., the map) while simultaneously estimating the state of the robot within that map. Because SLAM is used in many practical applications, such as autonomous driving, indoor service robots, and virtual or augmented reality, it has received considerable attention from the robotics and computer vision communities. We study a monocular vision-based localization and mapping algorithm for domestic ground robots moving on a plane, such as cleaning robots, nursing robots, and restaurant waiter robots. When localizing such robots, the planar-motion constraint is often used to improve localization accuracy. There are two ways to use the planar-motion constraint.
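The excerpt does not enumerate the two ways here, but to illustrate what a planar-motion constraint can look like in practice, the sketch below hard-projects a general 6-DoF pose onto the z = 0 motion plane, keeping only the yaw rotation and the in-plane translation. This formulation is assumed for illustration only and is not necessarily the one used in the paper; an alternative is to keep the full 6-DoF pose and penalize out-of-plane motion as a soft constraint in the optimization.

```python
import numpy as np

def project_to_plane(T):
    """Project a 4x4 homogeneous pose onto the z = 0 motion plane:
    keep only the yaw rotation and the (x, y) translation."""
    yaw = np.arctan2(T[1, 0], T[0, 0])        # heading extracted from the rotation block
    c, s = np.cos(yaw), np.sin(yaw)
    T_planar = np.eye(4)
    T_planar[:2, :2] = [[c, -s], [s, c]]      # pure yaw rotation
    T_planar[:2, 3] = T[:2, 3]                # keep x, y; drop z, roll, pitch
    return T_planar
```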
