Abstract

Relocalization is a mandatory task for robots or unmanned vehicles that must operate in changing environments over the long term. Recently, deep learning has been successfully applied to visual relocalization, where the absolute camera pose can be regressed from monocular RGB images. However, the resulting accuracy is still sub-optimal. In this paper, we propose a deep relocalization network that regresses the global poses of images in a scene. Our model takes in tuples of images and enforces constraints between the pose predictions for pairs as an additional loss term during training. The features encoded by the convolutional layers are further enhanced by two bidirectional LSTMs in a structured way to strengthen their inner correlation, which is the main contribution of this work. We then use unlabeled data to further fine-tune the network, and during inference we perform pose graph optimization (PGO) to obtain smoother and globally consistent pose predictions. Experiments on public indoor and outdoor datasets demonstrate that our model achieves better relocalization performance than the baseline and is more robust to illumination changes, texture-less areas, and repetitive structures in the scene.
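
As a rough illustration of the pipeline the abstract outlines, the sketch below shows a CNN backbone whose feature map is refined by two bidirectional LSTMs (one scanning rows, one scanning columns) before absolute-pose regression, together with a pairwise relative-pose loss term for image pairs. This is a minimal sketch, not the authors' implementation: the ResNet-34 backbone, hidden size, rotation weight `beta`, and the simplified world-frame relative translation are all assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models

class RelocNet(nn.Module):
    """Hypothetical pose-regression network: CNN features -> two
    bidirectional LSTMs over the feature grid -> translation + quaternion."""
    def __init__(self, hidden=256):
        super().__init__()
        backbone = models.resnet34(weights=None)  # assumed backbone choice
        self.cnn = nn.Sequential(*list(backbone.children())[:-2])  # (B, 512, H, W)
        # Two bidirectional LSTMs scan the feature grid along its two axes.
        self.lstm_h = nn.LSTM(512, hidden, batch_first=True, bidirectional=True)
        self.lstm_v = nn.LSTM(512, hidden, batch_first=True, bidirectional=True)
        self.fc_t = nn.Linear(4 * hidden, 3)  # translation head
        self.fc_q = nn.Linear(4 * hidden, 4)  # rotation head (unit quaternion)

    def forward(self, x):
        f = self.cnn(x)                                    # (B, C, H, W)
        B, C, H, W = f.shape
        rows = f.permute(0, 2, 3, 1).reshape(B * H, W, C)  # width-wise sequences
        cols = f.permute(0, 3, 2, 1).reshape(B * W, H, C)  # height-wise sequences
        _, (h_r, _) = self.lstm_h(rows)                    # (2, B*H, hidden)
        _, (h_c, _) = self.lstm_v(cols)                    # (2, B*W, hidden)
        # Average the per-row / per-column summaries into one descriptor each.
        feat_r = h_r.permute(1, 0, 2).reshape(B, H, -1).mean(dim=1)  # (B, 2*hidden)
        feat_c = h_c.permute(1, 0, 2).reshape(B, W, -1).mean(dim=1)
        feat = torch.cat([feat_r, feat_c], dim=1)                    # (B, 4*hidden)
        return self.fc_t(feat), F.normalize(self.fc_q(feat), dim=1)

def qmul(q, r):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = q.unbind(dim=1)
    w2, x2, y2, z2 = r.unbind(dim=1)
    return torch.stack([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                        w1*x2 + x1*w2 + y1*z2 - z1*y2,
                        w1*y2 - x1*z2 + y1*w2 + z1*x2,
                        w1*z2 + x1*y2 - y1*x2 + z1*w2], dim=1)

def pairwise_pose_loss(t1, q1, t2, q2, t12, q12, beta=10.0):
    """Auxiliary pair loss: penalize the mismatch between the predicted and
    ground-truth relative pose of an image pair. The world-frame translation
    difference is a simplification of the full SE(3) composition."""
    conj = torch.tensor([1.0, -1.0, -1.0, -1.0], device=q1.device)
    q_rel = qmul(q1 * conj, q2)  # relative rotation q1^{-1} * q2
    t_err = ((t2 - t1) - t12).norm(dim=1).mean()
    # q and -q encode the same rotation, so take the smaller distance.
    q_err = torch.minimum((q_rel - q12).norm(dim=1),
                          (q_rel + q12).norm(dim=1)).mean()
    return t_err + beta * q_err
```

Running the two LSTMs along orthogonal axes of the feature grid is one plausible reading of the "structured" feature correlation the abstract describes; the paper's exact scanning pattern, pooling, and loss weighting may differ.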
