Abstract

To solve the accurate positioning problem of mobile robots, simultaneous localization and mapping (SLAM) and visual odometry (VO) based on visual information are widely used. However, most visual SLAM and VO systems cannot meet accuracy requirements in dynamic indoor environments. This paper proposes a robust visual odometry based on deep learning that eliminates feature-matching errors caused by dynamic objects. When the camera and dynamic objects are in relative motion, the camera frames exhibit ghosting, especially in high-dynamic environments, which introduces additional positioning error. To address this problem, a novel method based on the average optical flow value of the dynamic region is proposed to identify feature points on the ghosting; the feature points of both the ghosting and the dynamic region are then removed. After the remaining feature points are matched, a non-linear optimization method is used to calculate the camera pose. The proposed algorithm is evaluated on the TUM RGB-D dataset, and the results show that our VO achieves higher positioning accuracy than other robust SLAM and VO methods and remains strongly robust, especially in high-dynamic environments.
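
The abstract gives no implementation details, so the following is only a minimal sketch of the ghosting filter it describes, assuming dense Farneback optical flow, a binary dynamic_mask produced by the deep-learning detector, and a hypothetical relative threshold ratio; the paper's exact flow computation and rejection criterion may differ.

```python
import cv2
import numpy as np

def filter_ghosting_keypoints(prev_gray, curr_gray, keypoints, dynamic_mask,
                              ratio=0.5):
    """Reject keypoints in the dynamic region and keypoints whose optical
    flow magnitude is close to the average flow of that region (a
    hypothetical rejection rule standing in for the paper's criterion)."""
    # Dense optical flow between consecutive grayscale frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)  # per-pixel flow magnitude
    h, w = mag.shape

    # Average flow magnitude inside the detected dynamic region.
    in_region = dynamic_mask > 0
    avg_dyn = mag[in_region].mean() if in_region.any() else 0.0

    kept = []
    for kp in keypoints:
        x = min(int(round(kp.pt[0])), w - 1)
        y = min(int(round(kp.pt[1])), h - 1)
        if in_region[y, x]:
            continue  # inside the dynamic region: always removed
        # Ghosting moves with the object, so its flow magnitude is
        # assumed to lie near the dynamic region's average.
        if avg_dyn > 0 and abs(mag[y, x] - avg_dyn) < ratio * avg_dyn:
            continue
        kept.append(kp)
    return kept
```

The surviving keypoints would then be matched across frames and fed to a non-linear pose optimizer (e.g., bundle adjustment over reprojection error), as the abstract outlines.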
