Abstract

State-of-the-art visual place recognition performance is currently being achieved with deep-learning-based approaches. Despite recent efforts in designing lightweight convolutional neural network based models, these can still be too expensive for the most hardware-restricted robot applications. Low-overhead visual place recognition techniques would not only enable platforms equipped with low-end, cheap hardware but also reduce computation on more powerful systems, allowing these resources to be allocated to other navigation tasks. In this work, our goal is to provide an algorithm of extreme compactness and efficiency while achieving state-of-the-art robustness to appearance changes and small point-of-view variations. Our first contribution is DrosoNet, an exceptionally compact model inspired by the odor processing abilities of the fruit fly, Drosophila melanogaster. Our second and main contribution is a voting mechanism that leverages multiple small and efficient classifiers to achieve more robust and consistent visual place recognition than a single one. We use DrosoNet as the baseline classifier for the voting mechanism and evaluate our models on five benchmark datasets, covering moderate to extreme appearance changes and small to moderate viewpoint variations. We then compare the proposed algorithms to state-of-the-art methods, both in terms of area under the precision-recall curve and computational efficiency.
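To make the voting idea concrete, the sketch below shows one generic way an ensemble of small classifiers could vote on the best-matching reference place for a query image. This is a minimal illustration only: the function `vote_place`, its interface, and the plurality-vote aggregation are assumptions for demonstration, not the paper's actual scheme.

```python
# Illustrative sketch (not the authors' exact implementation): an ensemble of
# small, DrosoNet-like classifiers votes on the best-matching reference place.
import numpy as np

def vote_place(query_descriptor, classifiers, num_places):
    """Aggregate votes from several lightweight classifiers.

    classifiers: list of callables, each mapping a query descriptor to a
    score vector of length `num_places` (higher = better match).
    Returns the winning place index and the fraction of votes it received per place.
    """
    votes = np.zeros(num_places)
    for clf in classifiers:
        scores = clf(query_descriptor)   # per-place match scores from one classifier
        votes[np.argmax(scores)] += 1    # each classifier casts a single vote
    return int(np.argmax(votes)), votes / len(classifiers)

# Example usage with toy stand-in classifiers (random scorers):
# rng = np.random.default_rng(0)
# clfs = [lambda d, r=rng: r.random(10) for _ in range(8)]
# best, vote_shares = vote_place(None, clfs, num_places=10)
```

The intuition is that individually weak, cheap classifiers make largely independent errors, so aggregating their votes yields a more consistent place match at a small computational cost.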

Highlights

  • Visual place recognition (VPR) refers to the ability of a computer system to determine if it has previously visited a given place using visual information

  • Performing highly robust and reliable VPR is a key feature for autonomous robotic navigation, as Simultaneous Localization and Mapping (SLAM) systems depend on loop-closure mechanisms for map correction [1]

  • We present two novel lightweight VPR algorithms as our contributions in this work: DrosoNet and the voting mechanism that builds on top of it



Introduction

Visual place recognition (VPR) refers to the ability of a computer system to determine if it has previously visited a given place using visual information. A revisited place can look extremely different from when it was first seen and recorded due to a variety of changing conditions: seasonal changes [2], different viewpoints [3], illumination levels [4], dynamic elements [5], or any combination of these factors. Conversely, different places can appear identical, especially within the same environment, an error known as perceptual aliasing.

