Abstract

Simultaneous localization and mapping (SLAM) systems have generally been limited to static environments. Moving objects considerably reduce the localization accuracy of SLAM systems, rendering them unsuitable for several applications. Using a combined vision camera and inertial measurement unit (IMU) to separate moving from static objects in dynamic scenes, we improve the localization accuracy and adaptability of SLAM systems in such scenes. We develop an algorithm that uses IMU data to eliminate feature-point matches located on moving objects while retaining matches on stationary objects. Moreover, we develop a second algorithm that validates the IMU data so that erroneous measurements do not affect the feature-point matching. We test the new algorithms on public datasets and in a real-world experiment. In terms of the root mean square error (RMSE) of the absolute pose error, the proposed method achieves higher positioning accuracy on the public datasets than traditional algorithms. In the real-world experiment, the closed-loop errors of the proposed method are 50.17% and 56.91% lower than those of OKVIS-mono and VINS-mono, respectively. Thus, the proposed method effectively eliminates matches on moving objects and produces feature-point matching results that better reflect the real scene.
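For context on the evaluation metric, the absolute pose error (APE) reported above is commonly summarized by its translational RMSE. The following is a minimal sketch of that computation, assuming the estimated and ground-truth trajectories are already time-synchronized and aligned (e.g., via an SE(3)/Umeyama alignment); the function and variable names are illustrative and not taken from the paper.

```python
import numpy as np

def rmse_ape(est_xyz: np.ndarray, gt_xyz: np.ndarray) -> float:
    """Translational RMSE of the absolute pose error between two aligned,
    time-synchronized trajectories, each of shape (N, 3)."""
    errors = np.linalg.norm(est_xyz - gt_xyz, axis=1)  # per-pose Euclidean error
    return float(np.sqrt(np.mean(errors ** 2)))
```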

Highlights

  • Depending on the type and number of cameras used, visual simultaneous localization and mapping (SLAM) can be divided into RGB-D SLAM, stereo SLAM, and monocular SLAM

  • Stereo SLAM obtains the depth of a feature point via two offset cameras; the distance between them is the baseline used for depth calculation (see the sketch after this list)

  • This paper proposes a method for eliminating feature points on moving objects using inertial measurement unit (IMU) data
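
As a quick illustration of the stereo highlight above, the depth of a matched feature follows from the focal length, the baseline, and the disparity as Z = f·B/d. The sketch below assumes a rectified stereo pair with the focal length in pixels and the baseline in meters; the function name and example values are illustrative only.

```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a feature point from a rectified stereo pair: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Example: f = 700 px, baseline B = 0.12 m, disparity d = 14 px -> depth Z = 6.0 m
print(stereo_depth(700.0, 0.12, 14.0))
```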

Introduction

Li et al. [17] proposed a real-time RGB-D SLAM system suitable for dynamic environments that uses a static weighting method for edge points in the keyframes. Tan et al. [18] presented a real-time monocular SLAM system with an online keyframe representation and updating method that reliably detects changed features by projecting the keyframes onto the current frame; they also proposed a novel prior-based adaptive random sample consensus (RANSAC) algorithm to efficiently remove match outliers. In our method, when the number of feature matches consistent with the IMU-predicted motion is sufficiently high, the IMU data are considered valid. In this case, the feature-point matches with large distances to the transformation matrix are considered abnormal matches (located on a moving object or erroneously matched) and are eliminated.
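To make the elimination step concrete, the sketch below illustrates the idea under stated assumptions rather than reproducing the paper's implementation: the IMU is assumed to provide the relative rotation `R` and translation `t` between the two camera frames, `K` is the known camera intrinsic matrix, the distances are taken to the epipolar lines of the fundamental matrix built from that transformation, and `thresh_px` and `min_inliers` are illustrative placeholders.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix [v]_x such that skew(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def fundamental_from_imu(R, t, K):
    """Fundamental matrix from the IMU-predicted relative pose: F = K^-T [t]_x R K^-1."""
    E = skew(t) @ R                       # essential matrix
    K_inv = np.linalg.inv(K)
    return K_inv.T @ E @ K_inv

def filter_matches(pts1, pts2, F, thresh_px=1.0, min_inliers=30):
    """Keep matches whose symmetric epipolar distance w.r.t. F is below thresh_px.

    pts1, pts2: (N, 2) pixel coordinates of matched feature points in the two frames.
    Returns (keep_mask, imu_valid).
    """
    ones = np.ones((pts1.shape[0], 1))
    x1 = np.hstack([pts1, ones])          # homogeneous coordinates, frame 1
    x2 = np.hstack([pts2, ones])          # homogeneous coordinates, frame 2
    l2 = x1 @ F.T                         # epipolar lines in frame 2 (F @ x1)
    l1 = x2 @ F                           # epipolar lines in frame 1 (F^T @ x2)
    algebraic = np.abs(np.sum(x2 * l2, axis=1))          # |x2^T F x1|
    dist = 0.5 * algebraic * (1.0 / np.hypot(l2[:, 0], l2[:, 1])
                              + 1.0 / np.hypot(l1[:, 0], l1[:, 1]))
    keep = dist < thresh_px
    imu_valid = int(keep.sum()) >= min_inliers  # enough consistent matches -> trust the IMU motion
    return keep, imu_valid
```

Matches with `keep == False` would then be discarded as likely lying on moving objects or being mismatches, while `imu_valid == False` would signal that the IMU-based check should be skipped for that frame pair.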

Overview
Obtaining the Transformation Matrix from IMU Data
Obtaining Distances between Feature Matches and the Fundamental Matrix
Eliminating Abnormal Matches Using IMU Data
Materials and Experimental Setup
ADVIO Dataset Experiment
Real-World Experiment
Self-Collected Data
Conclusions