Abstract

Point and line features have been widely used in visual SLAM (simultaneous localization and mapping) algorithms. However, most of these methods assume static environments, ignoring the dynamic objects that are often present in the real world and that can degrade SLAM performance. To address this problem, a line-expanded visual odometry is proposed. It computes optical flow between two adjacent frames to identify and eliminate point features on dynamic objects, then uses the remaining point features to find collinear relationships and expand them into line features for a point-feature-based visual SLAM algorithm. Finally, it uses the remaining point features together with the line features to estimate the camera pose. The proposed method not only reduces the influence of dynamic objects but also avoids tracking failure caused by too few point features. Experiments are carried out on the TUM dataset. Compared with state-of-the-art approaches such as the ORB (oriented FAST and rotated BRIEF) method and ORB combined with optical flow, the results demonstrate that the proposed method reduces the tracking error and improves the robustness and accuracy of visual odometry in dynamic environments.
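The abstract does not specify how optical flow is used to decide which point features are dynamic. The sketch below is a minimal illustration of one common approach, assuming pyramidal Lucas-Kanade tracking (OpenCV's calcOpticalFlowPyrLK) and a residual-against-median-flow test; the function name, the threshold, and the consistency criterion are placeholders, not the authors' implementation.

```python
import cv2
import numpy as np

def filter_dynamic_points(prev_gray, curr_gray, prev_pts, flow_threshold=2.0):
    """Track keypoints with pyramidal Lucas-Kanade optical flow and drop
    those whose motion deviates strongly from the dominant image motion,
    on the assumption that such points lie on dynamic objects.

    prev_pts: (N, 2) float32 array of keypoint locations in the previous frame.
    Returns the surviving (previous, current) point pairs.
    """
    pts = prev_pts.reshape(-1, 1, 2).astype(np.float32)

    # Track the previous frame's keypoints into the current frame.
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, pts, None, winSize=(21, 21), maxLevel=3)

    ok = status.reshape(-1).astype(bool)
    prev_ok = pts.reshape(-1, 2)[ok]
    curr_ok = curr_pts.reshape(-1, 2)[ok]

    # Flow vectors of the successfully tracked points.
    flow = curr_ok - prev_ok

    # The median flow is a robust proxy for camera-induced motion;
    # points with a large residual are treated as dynamic and removed.
    median_flow = np.median(flow, axis=0)
    residual = np.linalg.norm(flow - median_flow, axis=1)
    static = residual < flow_threshold

    return prev_ok[static], curr_ok[static]
```

The surviving static points would then be grouped by collinearity into line features and passed, together with the points, to the pose estimation stage described in the paper.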
