Abstract

Orientation estimation is a crucial part of robotics tasks such as motion control, autonomous navigation, and 3D mapping. In this paper, we propose a robust visual-based method to estimate robots’ drift-free orientation with RGB-D cameras. First, we detect and track hybrid features (i.e., plane, line, and point) from color and depth images, which provides reliable constraints even in uncharacteristic environments with low texture or no consistent lines. Then, we construct a cost function based on these features and, by minimizing this function, we obtain the accurate rotation matrix of each captured frame with respect to its reference keyframe. Furthermore, we present a vanishing direction-estimation method to extract the Manhattan World (MW) axes; by aligning the current MW axes with the global MW axes, we refine the aforementioned rotation matrix of each keyframe and achieve drift-free orientation. Experiments on public RGB-D datasets demonstrate the robustness and accuracy of the proposed algorithm for orientation estimation. In addition, we have applied our proposed visual compass to pose estimation, and the evaluation on public sequences shows improved accuracy.
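The abstract's drift-correction step, aligning the currently observed Manhattan World axes with the global MW axes to refine a keyframe's rotation, can be cast as an orthogonal Procrustes problem solved in closed form via SVD. The sketch below is an illustrative reconstruction of that idea under this assumption, not the paper's actual implementation; the function name `align_mw_axes` and the 3x3 column-axis convention are ours.

```python
import numpy as np

def align_mw_axes(current_axes: np.ndarray, global_axes: np.ndarray) -> np.ndarray:
    """Return the rotation R that best maps the observed MW axes onto the
    global MW axes (orthogonal Procrustes / Kabsch solution via SVD).

    Both inputs are 3x3 matrices whose columns are unit axis directions;
    R minimizes ||R @ current_axes - global_axes|| over all rotations.
    """
    B = global_axes @ current_axes.T
    U, _, Vt = np.linalg.svd(B)
    # Force det(R) = +1 so the result is a proper rotation, not a reflection.
    d = np.sign(np.linalg.det(U @ Vt))
    return U @ np.diag([1.0, 1.0, d]) @ Vt
```

In a pipeline like the one described, the frame-to-keyframe rotation accumulated from hybrid features would be left-multiplied by this correction, anchoring each keyframe's orientation to the global Manhattan frame and removing accumulated drift.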

Highlights

  • Robust orientation estimation is of great significance in robotics tasks such as motion control, autonomous navigation, and 3D mapping

  • The ICL-NUIM dataset is a collection of handheld RGB-D camera sequences within synthetically generated environments

  • These sequences were captured in a living room and an office room with perfect ground-truth poses, allowing the accuracy of a given visual odometry or simultaneous localization and mapping (SLAM) system to be fully quantified

Introduction

Robust orientation estimation is of great significance in robotics tasks such as motion control, autonomous navigation, and 3D mapping. Orientation can be obtained from onboard sensors such as wheel encoders, inertial measurement units (IMUs) [1,2,3,4], or cameras [5,6,7]. Among these solutions, visual-based methods [8,9,10,11] are effective, as cameras can conveniently capture informative images from which both orientation and position can be estimated. For instance, one approach proposes a global description method based on the Radon transform to estimate a robot's position and orientation with an equipped catadioptric vision sensor. These methods show good performance in estimating orientation from captured images. However, they require the construction of local and global maps, or loop detection, to reduce drift error.

