Abstract

Mainstream visual odometry methods often suffer from tracking loss in low-texture and motion-blur scenarios, where few effective features can be extracted and stable matches are difficult to obtain; the feature matching process also degrades overall real-time performance. For fast localization in low-texture environments, this paper proposes HGCN-VO, an efficient self-supervised direct visual odometry framework built on a keypoint extraction network. First, we build the half-geometric correspondence network, HGCN, for fast extraction of robust keypoints from images. During training, we propose a scheme that renders simulated images with pseudo-labels from basic shape elements and applies random homography transformations to real images, enabling pre-training and transfer learning while optimizing a keypoint loss between forward and reverse perspective-transformed images. Finally, we optimize the inter-frame pose using a multilayer sparse direct method combined with bundle adjustment, which improves robustness in low-texture environments while increasing processing speed. We evaluated the proposed method on KITTI, TUM, and challenging low-texture real-world scenes and compared it with current mainstream visual odometry methods; the results show that the algorithm is robust and accurate in low-texture environments while maintaining high processing speed.
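The homography-based self-supervision described above can be pictured with a minimal sketch. This is an illustrative reconstruction, not the authors' code: `net` stands in for HGCN's keypoint-heatmap head, and the corner-jitter magnitude, validity mask, and L2 consistency penalty are assumptions; the paper's actual keypoint loss may differ.

```python
import torch
from kornia.geometry.transform import get_perspective_transform, warp_perspective

def random_homography(b, h, w, device, jitter=0.15):
    """Sample a mild random homography by jittering the four image corners."""
    corners = torch.tensor([[[0.0, 0.0], [w - 1.0, 0.0],
                             [w - 1.0, h - 1.0], [0.0, h - 1.0]]],
                           device=device).repeat(b, 1, 1)
    offset = (torch.rand_like(corners) - 0.5) * 2 * jitter
    offset[..., 0] *= w  # scale the jitter to pixel units
    offset[..., 1] *= h
    return get_perspective_transform(corners, corners + offset)  # (b, 3, 3)

def consistency_loss(net, images):
    """Keypoint-heatmap consistency between an image and its warped copy.

    `net` is assumed to map (b, c, h, w) images to (b, 1, h, w) keypoint
    probability maps; this is a placeholder for the HGCN detector head.
    """
    b, _, h, w = images.shape
    H = random_homography(b, h, w, images.device)
    warped = warp_perspective(images, H, (h, w))        # forward transform
    heat = net(images)
    heat_w = net(warped)
    # Reverse transform: bring the warped prediction back to the original frame.
    heat_back = warp_perspective(heat_w, torch.inverse(H), (h, w))
    # Mask out pixels that left the image during the forward/reverse round trip.
    ones = torch.ones_like(heat)
    mask = warp_perspective(warp_perspective(ones, H, (h, w)),
                            torch.inverse(H), (h, w))
    return ((heat - heat_back) ** 2 * mask).sum() / mask.sum().clamp(min=1.0)
```

In this reading, the detector is rewarded for firing at the same physical locations before and after a known perspective change, which is what lets it train on real images without ground-truth keypoint labels.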
