Abstract

This paper presents an optimized monocular ego-motion estimation scheme that provides location and pose information for mobile robots equipped with a single fixed camera. First, a multi-scale hyper-complex wavelet phase-derived optical flow is applied to estimate the micro-motion of image blocks. Optical flow computation avoids the unreliable feature selection and feature matching of outdoor scenes, while the multi-scale strategy overcomes road-surface self-similarity and local occlusions. Second, a support probability is defined for each flow vector to evaluate the validity of candidate image motions, and a Maximum Likelihood Estimation (MLE) optical flow model is constructed to evaluate a given transform, based not only on image-motion residuals but also on their inlier/outlier distribution together with the support probabilities. This yields an optimized estimate of the inlier part of the optical flow. Third, a sampling and consensus strategy is designed to estimate the ego-motion parameters. The model and algorithms are tested on real datasets collected from an intelligent vehicle. The experimental results demonstrate that the estimated ego-motion parameters closely follow the GPS/INS ground truth in complex outdoor road scenarios.
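The MLE evaluation and sampling-and-consensus steps in the abstract can be sketched as follows. This is a minimal illustration only, assuming a Gaussian-inlier / uniform-outlier residual mixture and a pure image translation as the hypothesis model; the mixture form, parameter values, and function names are assumptions, not the paper's exact formulation:

```python
import numpy as np

def mle_score(residuals, support, sigma=1.0, eps=0.1, r_max=10.0):
    # Mixture likelihood of a flow set under a candidate motion:
    # Gaussian inliers (weight 1-eps) plus a uniform outlier floor (eps/r_max),
    # each vector's log-likelihood weighted by its support probability.
    inlier = (1.0 - eps) * np.exp(-residuals**2 / (2.0 * sigma**2)) \
             / (np.sqrt(2.0 * np.pi) * sigma)
    outlier = eps / r_max
    return float(np.sum(support * np.log(inlier + outlier)))

def sample_consensus_translation(flow, support, n_iters=100, seed=0):
    # flow: (N, 2) candidate optical-flow vectors; support: (N,) probabilities.
    # Repeatedly propose a motion from a minimal sample (one vector suffices
    # for pure translation) and keep the hypothesis with the best MLE score.
    rng = np.random.default_rng(seed)
    best_t, best_score = None, -np.inf
    for _ in range(n_iters):
        t = flow[rng.integers(len(flow))]       # minimal sample
        r = np.linalg.norm(flow - t, axis=1)    # residuals under this motion
        s = mle_score(r, support)
        if s > best_score:
            best_t, best_score = t, s
    return best_t
```

Because outlier vectors sit on the flat uniform term and low-support vectors are down-weighted, hypotheses drawn from the inlier cluster dominate the score, which is the intent of combining residual distribution and support probability in the evaluation.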

Highlights

  • Visual odometry involves the estimation of camera motion and the motion of the vehicle the camera is attached to, using a sequence of camera images

  • The Hyper-complex Wavelet (HCW) is based on the definition of the 2-D Hilbert Transform (HT) and the analytic signal [25] according to quaternion theory

  • The optical flow vector estimated in the HCW space on a larger scale only guides candidate image motions on a smaller scale


Summary

Introduction

Visual odometry is the estimation of camera motion, and hence of the motion of the vehicle to which the camera is attached, from a sequence of camera images. Choi [20] presented a feature-initialization and monocular EKF method for indoor-environment SLAM, and Milford and Wyeth [21] presented an appearance-based method, used in a RatSLAM scheme, that extracts approximate rotational and translational velocity information from a single perspective camera mounted on a car, tracking a template at the centre of the scene.

Basic Theory and Methods
Monocular ego-motion based on optical flow
Support probability and estimation
Maximum-likelihood optical flow model
Monocular Ego-motion Using MLE on Optical Flow
Experimental Results and Discussion
MLE optical flow computation
Conclusions
