Abstract

Image-based localization plays an important role in today's autonomous driving technologies. In large-scale outdoor environments, however, challenging conditions such as lighting changes and varying weather strongly affect image appearance and quality. Image feature detection and matching, a key component of feature-based visual localization, deteriorates severely under such conditions and degrades localization performance. In this paper, we propose a novel method for robust image feature matching in drastically changing outdoor environments. In contrast to existing approaches that try to learn robust feature descriptors, we train a deep network that outputs low-rank representations of the images in which the undesired variations are removed, and we perform feature extraction and matching in the learned low-rank space. We demonstrate that our learned low-rank images substantially improve the performance of image feature matching under conditions that vary over long periods of time.
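
The pipeline described above can be illustrated with a minimal sketch: each image is first mapped to its learned low-rank representation, and standard feature detection and matching are then run on the low-rank images instead of the raw inputs. The network handle `net` below is a hypothetical placeholder for the paper's trained model, and SIFT with a ratio test is used only as one possible detector/descriptor choice; neither is specified by the abstract.

```python
import cv2
import torch


def to_low_rank(net, img_bgr):
    """Map an image to its low-rank representation via the (hypothetical) trained network."""
    x = torch.from_numpy(img_bgr).float().permute(2, 0, 1).unsqueeze(0) / 255.0
    with torch.no_grad():
        y = net(x)  # assumed to return a low-rank image of the same spatial size, in [0, 1]
    y = (y.squeeze(0).permute(1, 2, 0).clamp(0, 1) * 255).byte().numpy()
    return cv2.cvtColor(y, cv2.COLOR_BGR2GRAY)


def match_low_rank(net, img_a, img_b, ratio=0.75):
    """Detect and match local features on the low-rank images rather than the raw images."""
    sift = cv2.SIFT_create()
    kpts_a, desc_a = sift.detectAndCompute(to_low_rank(net, img_a), None)
    kpts_b, desc_b = sift.detectAndCompute(to_low_rank(net, img_b), None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    # Lowe's ratio test keeps only distinctive correspondences.
    good = [m for m, n in matcher.knnMatch(desc_a, desc_b, k=2)
            if m.distance < ratio * n.distance]
    return kpts_a, kpts_b, good
```

The design point is that the detector and matcher stay unchanged; robustness to lighting and weather variation comes from running them on the learned low-rank images rather than from a new descriptor.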
