Abstract

RGB-D cameras that can provide rich 2D visual and 3D depth information are well suited to the motion estimation of indoor mobile robots. In recent years, several RGB-D visual odometry methods that process data from the sensor in different ways have been proposed. This paper first presents a brief review of recently proposed RGB-D visual odometry methods, and then presents a detailed analysis and comparison of eight state-of-the-art real-time 6DOF motion estimation methods in a variety of challenging scenarios, with a special emphasis on the trade-off between accuracy, robustness and computation speed. An experimental comparison is conducted using publicly available benchmark datasets and author-collected datasets in various scenarios, including long corridors, illumination changing environments and fast motion scenarios. Experimental results present both quantitative and qualitative differences between these methods and provide some guidelines on how to choose the right algorithm for an indoor mobile robot according to the quality of the RGB-D data and environmental characteristics.
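The quantitative comparison described above relies on trajectory accuracy metrics. As an illustration, the sketch below computes the root-mean-square absolute trajectory error (ATE RMSE), a metric commonly reported on public RGB-D benchmarks such as the TUM dataset; the array layout and the rigid (Kabsch/Umeyama-style) alignment step are assumptions for illustration, not the authors' actual evaluation code.

```python
import numpy as np

def ate_rmse(gt_xyz, est_xyz):
    """RMS absolute trajectory error after a least-squares rigid
    alignment of the estimated trajectory to the ground truth.
    Both inputs are (N, 3) arrays of time-matched positions.
    (Hypothetical helper for illustration, not the paper's code.)"""
    gt = np.asarray(gt_xyz, dtype=float)
    est = np.asarray(est_xyz, dtype=float)

    # Center both trajectories about their centroids.
    gt_c = gt - gt.mean(axis=0)
    est_c = est - est.mean(axis=0)

    # Optimal rotation via SVD of the cross-covariance (Kabsch).
    U, _, Vt = np.linalg.svd(est_c.T @ gt_c)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T

    # Align the estimate and measure per-pose translational error.
    est_aligned = (R @ est_c.T).T + gt.mean(axis=0)
    errors = np.linalg.norm(est_aligned - gt, axis=1)
    return float(np.sqrt(np.mean(errors ** 2)))

# A trajectory offset by a constant translation aligns perfectly,
# so its ATE RMSE is (numerically) zero.
gt = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 1, 0]], dtype=float)
est = gt + np.array([0.5, -0.2, 0.1])
print(ate_rmse(gt, est))
```

Because the alignment removes any global rigid-body offset, this metric isolates drift and local estimation error, which is why it is a natural basis for comparing odometry methods across datasets.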

Highlights

  • In the last decade, visual odometry (VO) [1] has become very popular in robotics and the computer vision community

  • It should be noted that the study of this paper only focuses on visual odometry methods, and various RGB-D visual simultaneous localization and mapping (V-SLAM) algorithms [11], [12] are not considered

  • The reason is that the passage is very narrow and the quick turn happened very close to a wall, where Xtion RGB-D cameras cannot obtain good RGB and depth data, owing to the fast motion and the Xtion's minimum measurement range of 80 cm


Summary

Introduction

Visual odometry (VO) [1] has become very popular in robotics and the computer vision community. In the past few years, several RGB-D visual odometry estimation methods [2,3,4,5] have been proposed. However, reliability remains an issue that prevents these methods from being used for the onboard guidance of a fully autonomous vehicle. Because these methods process the sensor data in different ways, they may fail in different types of challenging scenarios. Understanding how the accuracy of these methods degrades under different challenges is therefore important, and helps in designing a more robust motion estimation method for steering an autonomous vehicle.

