Abstract

As optical camera technology has become cheaper, computer vision-based navigation algorithms have grown in popularity. Given the proliferation of optical alternative navigation techniques such as Visual Odometry (VO), it is necessary to evaluate these algorithms on common ground. While benchmarks like the KITTI Vision Benchmark Suite compare many VO algorithms, this paper compares three types of top-down monocular VO algorithms (a purely feature-based method, a purely optical flow-based method, and a hybrid method), each fused with an inertial sensor, at a much more customizable and detailed level than before. The feature-based method uses Accelerated-KAZE (AKAZE) feature detection and description and matches descriptors with a brute-force matcher. The optical flow method creates a grid of points that are tracked with a Lucas-Kanade tracker to create matches. Both methods rely on RANdom SAmple Consensus (RANSAC) for outlier rejection. Under the assumption of a known height above ground, the estimated translation can be scaled to real-world units, allowing a three-dimensional velocity update. The hybrid method is based on a version of Semi-Direct Visual Odometry (SVO), which combines feature-level and pixel-level tracking. It relies on the same known-height assumption as the other methods and provides the same type of update.
