Abstract

When mobile robots work in unknown indoor environments, the surrounding scenes are dominated by low or repetitive texture. As a result, image features are easily lost during tracking, and poses are difficult to estimate as the robot moves back and forth in a narrow area. To address these tracking problems, we propose a one-circle feature-matching method, which performs a sequence of circle matching in time after space (STCM), and an STCM-based visual-inertial simultaneous localization and mapping (STCM-SLAM) technique. This strategy tightly couples the stereo camera and the inertial measurement unit (IMU) to better estimate the poses of a mobile robot working indoors. Forward-backward optical flow is used to track image features. The absolute accuracy and relative accuracy of STCM increase by 37.869% and 129.167%, respectively, compared with correlation flow. In addition, we compare our proposed method with other state-of-the-art methods. In terms of relative pose error, the accuracy of STCM-SLAM is an order of magnitude greater than that of ORB-SLAM2 and two orders of magnitude greater than that of OKVIS. Our experiments show that STCM-SLAM has clear advantages over OKVIS in scale error, running frequency, and CPU load, and it also performs best under real-time conditions. In the indoor experiments, STCM-SLAM accurately estimates the trajectory of the mobile robot. Based on the root mean square error, mean error, and standard deviation, the accuracy of STCM-SLAM is superior to that of either ORB-SLAM2 or OKVIS.
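The forward-backward optical flow mentioned above rests on a round-trip consistency check: each feature is tracked from frame t to frame t+1 and then back again, and tracks whose backward-tracked position drifts from the original location are rejected. The sketch below illustrates only that filtering step; the `fwd`/`bwd` trackers are synthetic stand-ins (a pure translation with one deliberately corrupted track), not the paper's actual tracker.

```python
import numpy as np

def forward_backward_filter(track_fwd, track_bwd, pts, max_err=1.0):
    """Keep only points whose backward-tracked position returns
    close to the original location (round-trip consistency)."""
    fwd = track_fwd(pts)               # positions in frame t+1
    back = track_bwd(fwd)              # re-tracked back into frame t
    err = np.linalg.norm(back - pts, axis=1)
    keep = err < max_err
    return fwd[keep], keep

# Synthetic demo: all points shift by a constant translation,
# but the backward tracker "fails" on the last point.
shift = np.array([2.0, -1.0])
pts = np.array([[10.0, 10.0], [50.0, 40.0], [80.0, 20.0]])

def fwd(p):
    return p + shift

def bwd(p):
    q = p - shift
    q[-1] += 5.0   # simulate a failed backward track
    return q

good, keep = forward_backward_filter(fwd, bwd, pts)
print(keep)   # [ True  True False]
```

In a real pipeline the two trackers would be the same pyramidal Lucas-Kanade tracker run in opposite temporal directions; the filter itself is unchanged.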

Highlights

  • The recent development of artificial intelligence and computer vision has led to unprecedented growth in the robotics industries

  • ORB-SLAM2 and OKVIS are state-of-the-art methods used in visual simultaneous localization and mapping (SLAM) and visual-inertial SLAM (VI-SLAM), respectively

  • This paper investigates the effect of tightly coupling the stereo camera and inertial measurement unit (IMU) in order to better estimate the position of mobile robots in unknown environments without occlusions


Summary

INTRODUCTION

The recent development of artificial intelligence and computer vision has led to unprecedented growth in the robotics industries (C. Chen et al.: Stereo Visual-Inertial SLAM Approach for Indoor Mobile Robots in Unknown Environments Without Occlusions). In low-texture indoor scenes, where the mobile robot is required to climb, accelerate, decelerate, and stop abruptly, image features can be hard to track, and scale estimation currently suffers from errors. To solve this issue, robust real-time localization and mapping systems are essential. The proposed system estimates the trajectory of the mobile robot with more accuracy than either ORB-SLAM2 or OKVIS.
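The accuracy claims above are reported as root mean square error, mean error, and standard deviation of the translational trajectory error. A minimal sketch of how such statistics are typically computed, assuming the estimated and ground-truth trajectories have already been time-associated and aligned (the alignment step is omitted here):

```python
import numpy as np

def trajectory_error_stats(est, gt):
    """RMSE, mean, and standard deviation of the per-pose
    translational error between two aligned trajectories."""
    err = np.linalg.norm(est - gt, axis=1)   # per-pose position error
    rmse = float(np.sqrt(np.mean(err ** 2)))
    return rmse, float(err.mean()), float(err.std())

# Toy example: a straight-line ground truth with small estimate offsets.
gt = np.array([[0.0, 0, 0], [1.0, 0, 0], [2.0, 0, 0]])
est = gt + np.array([[0.1, 0, 0], [0.0, 0.2, 0], [0.0, 0, 0.1]])
rmse, mean, std = trajectory_error_stats(est, gt)
print(rmse, mean, std)
```

These are the same three quantities the paper uses to rank STCM-SLAM against ORB-SLAM2 and OKVIS.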

RELATED WORK
ERROR FUNCTION
VISUAL CONSTRAINT
IMU CONSTRAINT
FORWARD BACKWARD OPTICAL FLOW
MARGINALIZATION
LOCALIZATION ACCURACY
Findings
CONCLUSION AND FUTURE WORK
