Abstract

Visual odometry is the process of estimating the motion of a mobile robot from a camera attached to it by matching point features between pairs of consecutive image frames. For mobile robots, a reliable method for comparing images is a key component of localization and motion estimation. In this paper, we study and compare the SIFT and SURF detectors/descriptors in terms of motion-estimation accuracy and runtime efficiency in the context of monocular visual odometry for mobile robots. We evaluate the performance of these detectors/descriptors in terms of repeatability, recall, precision, and computational cost. To estimate the relative camera pose from outlier-contaminated feature correspondences, the essential matrix and the inlier set are estimated using RANSAC. Experimental results demonstrate that SURF outperforms SIFT in both accuracy and speed.
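As an illustration of the pipeline the abstract describes, the following is a minimal sketch (not the authors' implementation) of one monocular visual-odometry step using OpenCV in Python: detect and match SIFT features between two consecutive frames, filter matches with Lowe's ratio test, then estimate the essential matrix and inlier set with RANSAC and recover the relative pose. The input frames and the 3x3 camera intrinsic matrix K are assumed to be available; using SURF instead would require an opencv-contrib build with the non-free modules enabled.

```python
# Minimal sketch of one monocular visual-odometry step (illustrative only).
# Assumes two consecutive grayscale frames img1, img2 and a 3x3 camera
# intrinsic matrix K are provided; this is not the authors' implementation.
import cv2
import numpy as np

def relative_pose(img1, img2, K):
    # Detect and describe point features with SIFT. (SURF would be
    # cv2.xfeatures2d.SURF_create() in an opencv-contrib non-free build.)
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Match descriptors and keep matches that pass Lowe's ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = []
    for pair in matches:
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # Estimate the essential matrix and the inlier set with RANSAC from the
    # outlier-contaminated correspondences, then recover the relative pose.
    E, mask = cv2.findEssentialMat(pts1, pts2, K,
                                   method=cv2.RANSAC, prob=0.999, threshold=1.0)
    n_inliers, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t, n_inliers
```

As is standard in monocular visual odometry, the recovered translation t is only defined up to scale.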
