Abstract

Visual localization plays an indispensable role in robotics. Both learned and hand-crafted feature-based relocalization methods have their own strengths and weaknesses, yet existing algorithms seldom consider the two kinds of features within a single framework. In this paper, we propose a novel relocalization framework for RGB or RGB-D inputs that combines coarse localization using learned features with pose refinement using hand-crafted features. The coarse stage consists of deep point cloud generation and registration: instead of regressing the camera pose directly, we design a neural network called PGNet that constructs a sparse point cloud from RGB or RGB-D inputs. Furthermore, a hand-crafted feature space is built from the training set. Starting from the camera pose estimated in the coarse stage, accurate point-to-point correspondences are established by searching this space, and the refined camera pose is then obtained by applying RANSAC to the correspondences or by solving PnP. Experiments on both outdoor and indoor benchmark datasets demonstrate state-of-the-art performance compared with existing methods.
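To make the refinement step concrete, the sketch below shows a RANSAC loop over 3D-to-3D point correspondences of the kind the coarse stage produces. This is a minimal NumPy illustration, not the paper's implementation: the function names are our own, the rigid transform is estimated with the standard Kabsch (SVD) method, and correspondences are assumed to be exact apart from outliers.

```python
import numpy as np

def kabsch(src, dst):
    """Estimate the rigid transform (R, t) with dst ~= R @ src + t via SVD."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

def ransac_rigid(src, dst, iters=200, thresh=0.05, seed=0):
    """RANSAC over minimal 3-point samples; refit on the best inlier set."""
    rng = np.random.default_rng(seed)
    n, best_mask, best_count = len(src), None, -1
    for _ in range(iters):
        idx = rng.choice(n, 3, replace=False)
        R, t = kabsch(src[idx], dst[idx])
        err = np.linalg.norm(src @ R.T + t - dst, axis=1)
        mask = err < thresh
        if mask.sum() > best_count:
            best_count, best_mask = mask.sum(), mask
    # final least-squares refit on all inliers of the best hypothesis
    return kabsch(src[best_mask], dst[best_mask])
```

For RGB-only inputs, where only 2D-to-3D correspondences are available, the analogous step is a PnP solver inside the same RANSAC loop (e.g. OpenCV's `cv2.solvePnPRansac`), as the abstract's "solving PnP" alternative suggests.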
