Abstract

Visual localization under varying conditions, such as changes in illumination, season, and weather, is a fundamental task for applications such as autonomous navigation. In this paper, we present a novel method for using semantic information in global image retrieval. By exploiting the distribution of classes in a semantic scene, the discriminative features of the scene's structural layout are embedded into a normalized vector that can be used for retrieval, i.e., semantic retrieval. Color image retrieval relies on low-level visual features extracted by hand-crafted algorithms or Convolutional Neural Networks (CNNs), whereas semantic retrieval relies on high-level semantic features that are robust to variations in scene appearance. By combining semantic retrieval with color image retrieval in the global retrieval step, we show that the two methods complement each other and significantly improve localization performance. Experiments on the challenging CMU Seasons dataset show that our method is robust across large variations in appearance and achieves state-of-the-art localization performance.
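
The abstract does not specify the exact embedding, but the core idea of encoding a scene's class distribution as a normalized retrieval descriptor can be illustrated with a minimal sketch. The code below assumes a single global class histogram over a per-pixel segmentation map, L2-normalized so that retrieval reduces to cosine similarity; the function names, the absence of any spatial partitioning, and the class count are illustrative assumptions, not the paper's formulation.

import numpy as np

def semantic_descriptor(label_map: np.ndarray, num_classes: int) -> np.ndarray:
    # Hypothetical sketch: embed the class distribution of a semantic
    # segmentation into a normalized vector. The paper's exact embedding
    # is not given in the abstract.
    hist = np.bincount(label_map.ravel(), minlength=num_classes).astype(np.float64)
    # L2-normalize so similarity between descriptors is a dot product
    # (cosine similarity for unit vectors).
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

def retrieve(query_desc: np.ndarray, db_descs: np.ndarray, k: int = 5) -> np.ndarray:
    # Return indices of the k database images most similar to the query.
    scores = db_descs @ query_desc
    return np.argsort(-scores)[:k]

# Usage (assumed names): build descriptors from segmented query and
# database images, then look up the top-k matches.
# query = semantic_descriptor(seg_query, num_classes=19)  # e.g. Cityscapes classes
# db = np.stack([semantic_descriptor(s, 19) for s in seg_db])
# top_k = retrieve(query, db)

Because such a descriptor depends on which classes appear in the scene rather than on their pixel intensities, it stays comparatively stable under illumination, seasonal, and weather changes, which is what makes it a useful complement to appearance-based color image retrieval.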
