Abstract

Accurate estimation of the camera trajectory is critical to the performance of visual simultaneous localization and mapping (SLAM). However, visual SLAM systems based on RGB images alone are generally not robust in challenging conditions such as low-texture scenes or large illumination changes. To address this problem, additional environmental information is incorporated by introducing depth data, and a feature extraction and matching algorithm that combines depth information is proposed. In this article, the mechanism by which depth images can be used to extract and match feature points is first discussed. Depth and appearance information are then jointly considered to extract and describe feature points. Finally, the feature matching problem is cast as a regression and classification problem, and a matching model is learned in a data-driven way. Experimental results show that the proposed algorithm yields more uniformly distributed features and higher matching accuracy, and effectively improves the trajectory accuracy and reduces the drift of the SLAM system.
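As one illustration of how depth and appearance information can be combined for feature matching, the sketch below detects appearance features on the RGB image, attaches the depth value at each keypoint, and discards matches whose depths disagree. It is a minimal, hypothetical example built on OpenCV, not the data-driven regression/classification matching model described above; the `rel_tol` threshold and the function names are assumptions made for illustration.

```python
# Hypothetical sketch: appearance features from the RGB image, with matches
# filtered by a depth-consistency check. This is NOT the paper's learned
# matching model; the threshold and helper names are illustrative assumptions.
import cv2
import numpy as np

def extract_features(rgb, depth):
    """Detect ORB keypoints on the RGB image and sample the depth map
    (assumed to be in metres, same resolution as rgb) at each keypoint."""
    gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=1000)
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    depths = np.array([depth[int(kp.pt[1]), int(kp.pt[0])] for kp in keypoints])
    return keypoints, descriptors, depths

def match_with_depth(desc1, depths1, desc2, depths2, rel_tol=0.15):
    """Brute-force Hamming matching on appearance, then reject pairs whose
    depths differ by more than rel_tol (relative difference)."""
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(desc1, desc2)
    kept = []
    for m in matches:
        d1, d2 = depths1[m.queryIdx], depths2[m.trainIdx]
        # Keep only matches with valid depth and consistent depth values.
        if d1 > 0 and d2 > 0 and abs(d1 - d2) / max(d1, d2) < rel_tol:
            kept.append(m)
    return kept
```

In this simplified setting, depth acts only as a post-matching consistency filter; the article instead incorporates depth directly into feature extraction and description and replaces the fixed threshold with a learned matching model.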
