Abstract

A fast and robust algorithm for motion estimation based on stereo vision is proposed. The algorithm consists of two consecutive stages. First, interest points are found and tracked across the four images from two consecutive time steps. A novel robust feature-tracking method is proposed for this step to address the shortcomings of traditional methods in indoor hand-held/wearable scenarios. Next, the motion of the camera over time is estimated by minimizing the total reprojection error associated with the robust features found in the first stage. A key assumption is that the motion between the two time steps is sufficiently small, or equivalently, that the frames are captured close enough in time. The proposed Visual Ego-motion Estimation (VEE) algorithm is robust thanks to the robust feature tracking in the first stage, and fast thanks to the efficient reprojection-error minimization used to estimate the six-degree-of-freedom (6DoF) motion of the camera. Experiments demonstrate results comparable or superior to state-of-the-art methods in the literature, especially for indoor hand-held scenarios.
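The second stage described above can be viewed as a nonlinear least-squares problem over the 6DoF inter-frame pose. The following is a minimal illustrative sketch, not the paper's implementation: it assumes 3-D points triangulated from the first stereo pair, their observed pixel locations in the next frame, and a known intrinsic matrix, and it uses SciPy's `least_squares` with an axis-angle pose parameterization. All names and numeric values are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

def rodrigues(rvec):
    """Rotation matrix from an axis-angle vector (Rodrigues' formula)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def project(points, K_cam):
    """Pinhole projection of Nx3 camera-frame points to pixel coordinates."""
    p = points @ K_cam.T
    return p[:, :2] / p[:, 2:3]

def reprojection_residuals(pose, pts3d, obs2d, K_cam):
    """Residuals between observed pixels and the 3-D points reprojected
    under the candidate motion (3 rotation + 3 translation parameters)."""
    R = rodrigues(pose[:3])
    t = pose[3:]
    return (project(pts3d @ R.T + t, K_cam) - obs2d).ravel()

# --- synthetic demo: recover a small inter-frame motion (illustrative) ---
rng = np.random.default_rng(0)
K_cam = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
pts3d = rng.uniform([-1, -1, 4], [1, 1, 8], size=(30, 3))
true_pose = np.array([0.02, -0.01, 0.03, 0.1, -0.05, 0.2])  # small motion
obs2d = project(pts3d @ rodrigues(true_pose[:3]).T + true_pose[3:], K_cam)

result = least_squares(reprojection_residuals, np.zeros(6),
                       args=(pts3d, obs2d, K_cam))
```

In practice the residuals would be summed over all four views and robustified (e.g., with a Huber loss) against tracking outliers; the small-motion assumption stated in the abstract is what makes the zero initialization adequate here.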
