Abstract

Although the introduction of commercial RGB-D sensors has enabled significant progress in visual navigation methods for mobile robots, structured-light sensors such as the Microsoft Kinect and Asus Xtion Pro Live have important limitations with respect to range, field of view, and depth measurement accuracy. The recent introduction of the second-generation Kinect, based on the time-of-flight measurement principle, gave robotics and computer vision researchers a sensor that overcomes some of these limitations. However, as the new Kinect is, just like the older one, intended for computer games and human motion capture rather than for navigation, it is unclear how much navigation methods such as visual odometry and SLAM can benefit from its improved parameters. While there are many publicly available RGB-D data sets, only a few provide the ground truth information necessary for evaluating navigation methods, and to the best of our knowledge, none of them contains sequences registered with the new version of Kinect. Therefore, this paper describes a new RGB-D data set, which is a first attempt to systematically evaluate indoor navigation algorithms on data from two different sensors in the same environment and along the same trajectories. The data set contains synchronized RGB-D frames from both sensors and the appropriate ground truth from an external motion capture system based on distributed cameras. We describe the data registration procedure in detail and then evaluate our RGB-D visual odometry algorithm on the obtained sequences, investigating how the specific properties and limitations of both sensors influence the performance of this navigation method.
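
Evaluating a visual odometry algorithm against motion-capture ground truth usually boils down to comparing two time-synchronized trajectories. As an illustrative sketch only (not the paper's exact evaluation protocol), the following computes the root-mean-square absolute trajectory error between estimated and ground-truth positions; the function name and the translation-only alignment are our own simplifications:

```python
import numpy as np

def ate_rmse(est, gt):
    """RMS absolute trajectory error between two time-synchronized
    position sequences (N x 3), after translation-only alignment
    (each trajectory is shifted to its own centroid)."""
    est = np.asarray(est, dtype=float)
    gt = np.asarray(gt, dtype=float)
    # Remove the constant offset between the two coordinate frames.
    diff = (est - est.mean(axis=0)) - (gt - gt.mean(axis=0))
    # RMS of the per-frame Euclidean position errors.
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))
```

A full evaluation would additionally align rotation (e.g. with the Umeyama method) and report relative pose errors over fixed time intervals.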

Highlights

  • To ease the use of the data, we provide the ground truth trajectories of the Kinect v1 and the Kinect v2 with respect to the global coordinate system G, computed with the transformation composition G_T_P · P_T_Ki = G_T_Ki (Eq. (10))

  • The article introduces the PUTK2 RGB-D data set for the evaluation of robot navigation algorithms
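
The relation in Eq. (10) is a standard chaining of homogeneous transforms: the global pose of a Kinect is obtained by composing the motion-capture pose of the tracked body P with the fixed, calibrated P-to-sensor transform. A minimal sketch (the names make_transform, T_G_P, and T_P_K are our illustrative assumptions, and the example translations are arbitrary):

```python
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# T_G_P: pose of the tracked marker body P in the global frame G (from motion capture).
T_G_P = make_transform(np.eye(3), [1.0, 2.0, 0.5])
# T_P_K: fixed, calibrated transform from P to the Kinect optical frame.
T_P_K = make_transform(np.eye(3), [0.1, 0.0, 0.05])

# Composition as in Eq. (10): global pose of the Kinect.
T_G_K = T_G_P @ T_P_K
```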

Summary

Introduction

Available data for algorithm evaluation are instrumental to achieving scientific progress in many disciplines. Providing a common ground for objective and reliable benchmarking improves research transparency and reproducibility [21]. The fields of computer vision and robotics are no exception. One relevant problem in robotics, vision-based mobile robot navigation, is situated at the crossroads of these fields. The recent launch of compact, inexpensive RGB-D sensors based on structured-light [18] or time-of-flight cameras [19] provided the robotics community with an attractive sensing solution, enabling the integration of depth and vision data within the robot navigation processing pipeline for increased accuracy.
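
Integrating depth and vision data in such a pipeline typically begins with back-projecting depth pixels into 3-D points using the pinhole camera model. A minimal sketch, assuming a depth image in meters and intrinsics fx, fy, cx, cy (the function name and parameters are ours, not from the paper):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) to a 3-D point cloud
    using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    # Pixel coordinate grids: u varies along columns, v along rows.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    # Shape (h, w, 3): one 3-D point per pixel.
    return np.stack([x, y, z], axis=-1)
```

With points from two synchronized frames, frame-to-frame motion can then be estimated, which is the core step of RGB-D visual odometry.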
