Abstract

This paper describes a novel sensor system for estimating the motion of a stereo camera. Local invariant image features are matched between pairs of frames and linked into image trajectories at video rate, providing so-called visual odometry, i.e., motion estimates from visual input alone. Our proposal conducts two matching sessions: the first between the sets of features associated with the images of each stereo pair, and the second between the sets of features associated with consecutive frames. With respect to previously proposed approaches, the main novelty of this proposal is that both matching sessions are conducted by a fast matching algorithm that combines absolute and relative feature constraints. Finding the largest set of mutually consistent matches is equivalent to finding the maximum-weighted clique on a graph. The stereo matching represents the scene view as a graph that emerges from the features of the accepted clique, while the frame-to-frame matching defines a graph whose vertices are features in 3D space. The efficiency of the approach is increased by minimizing the geometric and algebraic errors when estimating the final displacement of the stereo camera between consecutive frames. The proposed approach has been tested for mobile robot navigation in real environments and with different features. Experimental results demonstrate the performance of the proposal, which could be applied in both industrial and service robotics.
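The matching scheme described above can be illustrated with a minimal sketch: candidate feature matches become vertices of a consistency graph, two vertices are joined by an edge when the pair of matches preserves the relative (pairwise distance) constraint, and the accepted set of matches is a heavy clique in that graph. The function names, the tolerance parameter, and the greedy clique heuristic below are our own assumptions for illustration, not the exact algorithm of the paper (which searches for the maximum-weighted clique):

```python
import itertools

def dist(p, q):
    """Euclidean distance between two 3D points."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def consistency_graph(matches, pts_a, pts_b, tol=0.1):
    """Build the consistency graph: vertices are candidate matches
    (index_in_a, index_in_b); an edge joins two matches when they
    preserve pairwise distances between the two point sets
    (the relative constraint)."""
    n = len(matches)
    adj = [set() for _ in range(n)]
    for i, j in itertools.combinations(range(n), 2):
        ai, bi = matches[i]
        aj, bj = matches[j]
        d_a = dist(pts_a[ai], pts_a[aj])
        d_b = dist(pts_b[bi], pts_b[bj])
        if abs(d_a - d_b) < tol:
            adj[i].add(j)
            adj[j].add(i)
    return adj

def greedy_max_weight_clique(adj, weights):
    """Greedy heuristic (an assumption, not the paper's exact search):
    seed with the heaviest vertex, then repeatedly add the heaviest
    remaining vertex adjacent to every clique member."""
    order = sorted(range(len(weights)), key=lambda v: -weights[v])
    clique = [order[0]]
    for v in order[1:]:
        if all(v in adj[u] for u in clique):
            clique.append(v)
    return clique
```

On a toy example where three matches are related by a rigid motion and one is an outlier, the outlier violates the pairwise-distance constraint with every inlier, so it is excluded from the returned clique.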

Highlights

  • In order to accomplish higher-level tasks, autonomous mobile robots must typically be able to determine their pose while moving

  • The main novelty of this work, the combined constraint matching algorithm which includes the search for the maximum-weight clique on the graphs, is evaluated in terms of robustness and computational load for different descriptors, and it is compared with three other feature matching approaches

  • The three approaches used for comparison are: (i) … + epipolar geometry, (ii) the best-bin-first (BBF) search method proposed by Beis and Lowe [21], which is a modification of the k-d tree algorithm, and (iii) the matching approach based on the combined constraint algorithm which uses the search for the maximum clique described in our previous work [22]



Introduction

In order to accomplish higher-level tasks, autonomous mobile robots must typically be able to determine their pose (position and orientation) while moving. To address this problem, absolute localization approaches usually employ the estimation of the robot's displacement in the environment between consecutively acquired perceptions as one of their inputs. This relative localization, or pose tracking, is performed using wheel odometry (from joint encoders) or inertial sensing (gyroscopes and accelerometers). Wheel odometry techniques cannot be applied to robots with non-standard locomotion methods, such as legged robots.


