Abstract

Vision-based simultaneous localization and mapping (SLAM) is a key technology for the autonomous navigation of mobile robots. In an unfamiliar environment, a robot typically estimates its pose from the point features of its surroundings. However, when the environment offers few features or contains many dynamic objects, the camera trajectory cannot be estimated accurately. To address this, this paper proposes an RGB-D visual odometry that combines point features and line features. Dynamic line features are eliminated by computing a static weight for each line feature, and the camera pose is estimated from the point features and the remaining line features. Compared with other feature-based SLAM systems, the proposed method improves the accuracy and robustness of pose estimation in feature-poor or dynamic environments.
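The abstract does not give the static-weight formula, but the filtering step it describes can be illustrated with a minimal sketch. Here we assume, purely for illustration, that each line feature's static weight decays with its frame-to-frame reprojection error (a Gaussian kernel with a hypothetical scale `sigma`), and that lines whose weight falls below a threshold are treated as dynamic and discarded before pose estimation; the paper's actual weighting scheme may differ.

```python
import numpy as np

def static_weights(reproj_errors, sigma=2.0):
    """Hypothetical static weight: a large reprojection error between
    consecutive frames suggests a moving line, so its weight decays
    toward 0 via a Gaussian kernel (illustrative choice, not the
    paper's formula)."""
    errors = np.asarray(reproj_errors, dtype=float)
    return np.exp(-(errors / sigma) ** 2)

def filter_dynamic_lines(lines, reproj_errors, threshold=0.5):
    """Keep only line features whose static weight is at or above
    the threshold; the rest are assumed dynamic and dropped."""
    weights = static_weights(reproj_errors)
    kept = [line for line, w in zip(lines, weights) if w >= threshold]
    return kept, weights

# Example: three tracked lines, one with a large reprojection error.
lines = ["line_0", "line_1", "line_2"]
errors = [0.1, 5.0, 0.5]  # pixels; line_1 moves and reprojects badly
kept, weights = filter_dynamic_lines(lines, errors)
```

In this toy run, `line_1`'s weight is near zero, so only `line_0` and `line_2` survive to the pose-estimation stage alongside the point features.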
