Abstract

In this paper, we present a novel RGB-D SLAM system based on visual odometry and an extended information filter that requires no additional sensors or odometry. In contrast to graph-optimization approaches, this design is better suited to online applications. We devise a visual dead-reckoning algorithm based on visual residuals, which is used to estimate the motion control input. In addition, we use a novel descriptor, the binary robust appearance and normals descriptor (BRAND), to extract features from each RGB-D frame and use them as landmarks. Furthermore, by considering both the 3D positions and the BRAND descriptors of the landmarks, our observation model avoids explicit data association between the observations and the map by marginalizing the observation likelihood over all possible associations. Experimental validation compares the proposed RGB-D SLAM algorithm with pure RGB-D visual odometry and with a graph-based RGB-D SLAM algorithm on a publicly available RGB-D dataset. The results demonstrate that our system is faster than the graph-based RGB-D SLAM algorithm.

Highlights

  • Self-localization is widely recognized as one of the most basic problems for an autonomous robot with respect to navigation

  • We propose a new RGB-D simultaneous localization and map building (SLAM) method based on visual odometry (VO) and the extended information filter (EIF), referred to as VO-EIF RGB-D SLAM

  • The filter update executes when either of two conditions is satisfied: the accumulated change in x, y, or z of the visual odometry exceeds t1, or the accumulated change in roll, pitch, or yaw of the visual odometry exceeds t2

Introduction

Self-localization is widely recognized as one of the most basic problems for an autonomous robot with respect to navigation. This paper makes the following contributions: (1) we adopt a method based on the extended information filter to reduce the dimensionality of the high-dimensional state space; (2) inspired by related work [12,13,14], we employ a binary feature descriptor for feature matching, which effectively reduces complexity; (3) we build an RGB-D feature observation model that combines the 3D positions and the binary descriptors of the landmarks and avoids explicit data association between the observations and the map; and (4) we devise a visual dead-reckoning algorithm based on visual residuals, used to estimate the motion control input, to avoid over-estimation.
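Contribution (3), marginalizing the observation likelihood over all possible landmark associations, can be illustrated with a short sketch. The Gaussian position term, the bit-overlap descriptor term, the uniform association prior, and all constants below are illustrative assumptions rather than the paper's actual model.

```python
import numpy as np

def marginalized_likelihood(obs_pos, obs_desc, landmarks, sigma=0.05):
    """Sketch of an observation likelihood that sums over every candidate
    landmark association instead of committing to one match. Each landmark
    is a (3D position, binary descriptor) pair, mirroring the paper's use
    of positions plus BRAND-like binary descriptors. Constants and the
    descriptor scoring are illustrative assumptions.
    """
    total = 0.0
    for pos, desc in landmarks:
        # Gaussian likelihood of the observed 3D position given this landmark.
        d2 = np.sum((obs_pos - pos) ** 2)
        p_pos = np.exp(-0.5 * d2 / sigma**2)
        # Descriptor term: fraction of bits that agree between the
        # observed and stored binary descriptors (1 - normalized Hamming).
        hamming = np.count_nonzero(obs_desc != desc)
        p_desc = 1.0 - hamming / len(desc)
        total += p_pos * p_desc
    # Uniform prior over associations: average rather than pick a winner.
    return total / len(landmarks)
```

Because the likelihood is a sum over associations, no hard matching decision is made, which is what lets the observation model avoid explicit data association.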

Related Research
Extended Information Filter Approach to SLAM
RGB-D Image Feature Descriptor
The Overall RGB-D Observation Model
RGB-D Correlation Model of Observation
Motion Model
Camera Model
Warping Function
Probabilistic Estimation
Optimization of Motion Estimation
Experimental Results
Lab Environment Results
Benchmark Results
Method
Conclusions