Abstract

This paper presents a real-time, robust and low-drift depth-only SLAM (simultaneous localization and mapping) method for depth cameras that exploits both dense range flow and sparse geometric features from sequential depth images. The proposed method is composed of three optimization layers: the Direct Depth layer, the ICP (Iterative Closest Point) Refined layer and the Graph Optimization layer. The Direct Depth layer uses a range flow constraint equation to solve the fast 6-DOF (six degrees of freedom) frame-to-frame pose estimation problem. The ICP Refined layer then reduces local drift by applying a local-map-based motion estimation strategy. Finally, we propose a loop closure detection algorithm that extracts and matches sparse geometric features, and we construct a pose graph for global pose optimization. We evaluate the performance of our method on benchmark datasets and real scene data. Experimental results show that our front-end algorithm clearly outperforms classic methods and that our back-end algorithm robustly detects loop closures and reduces global drift.
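The back end described above uses pose-graph optimization to distribute accumulated drift once a loop closure is found. As a rough, hypothetical illustration of that idea (not the paper's implementation), the sketch below optimizes a one-dimensional pose graph in which drifting odometry edges are corrected by a single high-weight loop-closure edge; all values are synthetic.

```python
import numpy as np

# Minimal 1-D pose-graph least squares (synthetic illustration).
# An edge (i, j, z, w) encodes the weighted constraint p_j - p_i ≈ z.
def optimize_pose_graph(odom, loops, n):
    edges = [(i, i + 1, z, 1.0) for i, z in enumerate(odom)] + loops
    A = np.zeros((len(edges) + 1, n))
    b = np.zeros(len(edges) + 1)
    A[0, 0] = 1.0                      # anchor p0 near 0 (fixes gauge freedom)
    for k, (i, j, z, w) in enumerate(edges, start=1):
        A[k, i], A[k, j], b[k] = -w, w, w * z
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

odom = [1.1, 1.1, 1.1, 1.1]    # drifting frame-to-frame estimates (true step: 1.0)
loops = [(0, 4, 4.0, 10.0)]    # one high-weight loop-closure edge: p4 - p0 ≈ 4.0
p = optimize_pose_graph(odom, loops, n=5)
```

Because the loop-closure residual is weighted heavily, the optimized relative pose p4 − p0 is pulled from the dead-reckoned 4.4 back toward the measured 4.0, which is the drift-reduction effect a pose-graph back end provides.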

Highlights

  • Visual odometry is gaining importance in the field of robotics and computer vision

  • The Iterative Closest Point (ICP) method further improves the accuracy of the transformation estimated by a sparse geometric feature matching method

  • In order to show the excellent performance of our front-end algorithm, we compared our method with other classic methods (ICP, Generalized ICP (GICP), Normal Distributions Transform (NDT)) on the publicly available TUM RGB-D benchmark
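Since the highlights above center on ICP refinement, a minimal point-to-point ICP sketch may help make that step concrete. This is a generic textbook version in NumPy (brute-force nearest-neighbour data association plus a Kabsch/SVD alignment step), not the authors' implementation; the point cloud and ground-truth transform below are synthetic.

```python
import numpy as np

def icp_point_to_point(src, dst, iters=30):
    """Align src (N,3) to dst (M,3) with vanilla point-to-point ICP."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = src @ R.T + t
        # Brute-force nearest-neighbour correspondences (fine for small clouds).
        d2 = ((moved[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        # Kabsch/SVD step: best rigid transform for the current matches.
        mu_s, mu_d = moved.mean(0), matched.mean(0)
        H = (moved - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T
        t_step = mu_d - R_step @ mu_s
        R, t = R_step @ R, R_step @ t + t_step   # compose incremental update
    return R, t

rng = np.random.default_rng(0)
cloud = rng.uniform(size=(200, 3))
a = np.deg2rad(5.0)                              # small ground-truth rotation
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
t_true = np.array([0.05, 0.02, -0.03])
target = cloud @ R_true.T + t_true
R, t = icp_point_to_point(cloud, target)
err = np.linalg.norm(cloud @ R.T + t - target, axis=1).mean()
```

With a small initial misalignment like this, the nearest-neighbour matches are mostly correct from the start and the iteration converges to the ground-truth transform; in practice ICP only refines an estimate that is already close, which is exactly the role it plays after a coarse frame-to-frame estimate.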



Introduction

State estimation, mapping and obstacle avoidance are fundamental capabilities for mobile robots. Most existing methods rely mainly on visual features. Several depth odometry or mapping methods have been proposed in recent years—for example, Sparse Depth Odometry (SDO) [1], SDF Tracker [2], DIFferential ODOmetry (DIFODO) [3] and KinectFusion [4]. Since these are odometry and mapping methods only, lacking global map optimization, they cannot obtain a globally consistent trajectory in large-scale scenes. Compared with RGB images, the depth images from current depth vision sensors still have low resolution and low frame rates, as well as …
Sensors 2018, 18, 3339; doi:10.3390/s18103339 www.mdpi.com/journal/sensors

