Abstract

For any visual feature-based SLAM (simultaneous localization and mapping) solution, estimating the relative camera motion between two images requires finding "correct" correspondences between the features extracted from those images. Given a set of feature correspondences, one can use an n-point algorithm with a robust estimation method to produce the best estimate of the relative camera pose. The accuracy of a motion estimate depends heavily on the accuracy of the feature correspondences. This dependency is even more significant when features are extracted from images of scenes with drastic changes in viewpoint and illumination and with the presence of occlusions. To make feature matching robust to such challenging scenes, we propose a new feature matching method that incrementally chooses five pairs of matched features for full DoF (degree of freedom) camera motion estimation. In the first stage, we use our 2-point algorithm to estimate a camera motion; in the second stage, we use this estimated motion to choose three more matched features. In addition, we use a planar constraint, instead of the epipolar constraint, for more accurate outlier rejection. With this set of five matched features, we estimate a full DoF camera motion with scale ambiguity. Through experiments with three real-world data sets, our method demonstrates its effectiveness and robustness by successfully matching features (1) from images of a night market with frequent occlusions and varying illumination, (2) from images of a night market taken by a handheld camera and by Google Street View, and (3) from images of the same location taken in daytime and at nighttime.
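As a rough illustration of the two-stage pipeline described above, the sketch below runs a RANSAC loop over 2-point motion hypotheses and then selects the three additional matches most consistent with the winning hypothesis. Since neither the paper's 2-point solver nor its planar constraint is specified in this abstract, the sketch substitutes a 2D similarity transform as the motion model and a reprojection distance as the consistency test; all function names, thresholds, and the synthetic data are assumptions made for the example.

```python
import numpy as np

# Hypothetical sketch of the two-stage incremental matching pipeline.
# The paper's actual 2-point solver and planar constraint are not given
# in the abstract, so a 2D similarity transform stands in for the
# motion hypothesis; names and thresholds are illustrative assumptions.

rng = np.random.default_rng(0)

def similarity_from_two_points(src, dst):
    """Closed-form 2D similarity (scale + rotation + translation) from
    two point pairs; stands in for the 2-point motion hypothesis."""
    a = complex(*(dst[1] - dst[0])) / complex(*(src[1] - src[0]))
    A = np.array([[a.real, -a.imag], [a.imag, a.real]])
    return A, dst[0] - A @ src[0]

def residuals(A, t, src_pts, dst_pts):
    """Consistency test stand-in: reprojection distance under (A, t)."""
    return np.linalg.norm(src_pts @ A.T + t - dst_pts, axis=1)

def incremental_five_matches(src_pts, dst_pts, n_iters=200, thresh=2.0):
    # Stage 1: RANSAC over 2-point motion hypotheses.
    best = None
    for _ in range(n_iters):
        i, j = rng.choice(len(src_pts), size=2, replace=False)
        A, t = similarity_from_two_points(src_pts[[i, j]], dst_pts[[i, j]])
        r = residuals(A, t, src_pts, dst_pts)
        n_in = int((r < thresh).sum())
        if best is None or n_in > best[0]:
            best = (n_in, r, (i, j))
    _, r, seed = best
    # Stage 2: keep the seed pair plus the three remaining matches most
    # consistent with the stage-1 motion (smallest residuals).
    extras = [int(k) for k in np.argsort(r) if k not in seed][:3]
    return list(seed) + extras  # five pairs for a 5-point solver

# Tiny synthetic demo: 30 putative matches, 9 of them corrupted.
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
src = rng.uniform(0.0, 100.0, size=(30, 2))
dst = src @ R.T + np.array([5.0, -3.0])
dst[:9] += rng.uniform(20.0, 40.0, size=(9, 2))  # inject gross outliers
print(incremental_five_matches(src, dst))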
