Abstract

This letter presents a novel, compute-efficient and training-free approach based on the Histogram-of-Oriented-Gradients (HOG) descriptor for achieving state-of-the-art performance-per-compute-unit in Visual Place Recognition (VPR). The inspiration for this approach (namely CoHOG) is the convolutional scanning and region-based feature extraction employed by Convolutional Neural Networks (CNNs). By using image entropy to extract regions-of-interest (ROI) and performing regional-convolutional descriptor matching, our technique achieves successful place recognition in changing environments. We use viewpoint- and appearance-variant public VPR datasets to report this matching performance, at lower RAM commitment, zero training requirements and 20 times lower feature encoding time compared to state-of-the-art neural networks. We also discuss the image retrieval time of CoHOG and the effect of CoHOG's parametric variation on its place matching performance and encoding time.
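
The abstract names the two key steps of CoHOG: entropy-based region-of-interest extraction and regional-convolutional HOG descriptor matching. The Python sketch below illustrates that general idea only; it is not the authors' implementation, and the block size, entropy threshold, HOG parameters and cosine-similarity scoring are assumptions made purely for illustration.

```python
# Minimal sketch of entropy-gated regional HOG matching (illustrative only).
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import hog
from skimage.util import img_as_float

BLOCK = 32            # side length of a candidate region (assumed)
ENTROPY_THRESH = 4.0  # minimum Shannon entropy, in bits, for a region of interest (assumed)

def block_entropy(patch):
    """Shannon entropy of the grey-level histogram of one image patch."""
    hist, _ = np.histogram(patch, bins=64, range=(0.0, 1.0), density=True)
    p = hist[hist > 0]
    p = p / p.sum()
    return float(-(p * np.log2(p)).sum())

def regional_hog_descriptors(image, keep_all=False):
    """Split the image into BLOCK x BLOCK patches and return one L2-normalised
    HOG descriptor per patch; optionally keep only high-entropy patches."""
    grey = rgb2gray(image) if image.ndim == 3 else img_as_float(image)
    descriptors = []
    for y in range(0, grey.shape[0] - BLOCK + 1, BLOCK):
        for x in range(0, grey.shape[1] - BLOCK + 1, BLOCK):
            patch = grey[y:y + BLOCK, x:x + BLOCK]
            if keep_all or block_entropy(patch) >= ENTROPY_THRESH:
                d = hog(patch, orientations=8, pixels_per_cell=(16, 16),
                        cells_per_block=(1, 1), feature_vector=True)
                descriptors.append(d / (np.linalg.norm(d) + 1e-12))
    return np.asarray(descriptors)

def match_score(query_image, reference_image):
    """Compare every high-entropy query region against every reference region
    (a convolution-like search) and average the best cosine similarities."""
    q = regional_hog_descriptors(query_image)                 # ROIs only
    r = regional_hog_descriptors(reference_image, keep_all=True)
    if len(q) == 0 or len(r) == 0:
        return 0.0
    similarities = q @ r.T
    return float(similarities.max(axis=1).mean())
```

Because only descriptor extraction and matrix products are involved, matching in this style requires no training phase, which is the property the letter emphasises.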

Highlights

  • For a robot to operate autonomously, it needs to be able to remember previously visited places

  • We propose a novel technique based on handcrafted feature descriptors that delivers state-of-the-art Visual Place Recognition (VPR) performance without the training requirements of Convolutional Neural Networks (CNNs)

  • We evaluate CoHOG on viewpoint- and appearance-variant public VPR datasets, reporting its matching performance, RAM commitment, feature encoding time and image retrieval time against state-of-the-art VPR techniques

Introduction

For a robot to operate autonomously, it needs to be able to remember previously visited places. This ability to remember places has been discussed and widely researched (surveyed by Lowry et al. [1]) as a sub-domain of visual SLAM (Simultaneous Localization and Mapping), namely Visual Place Recognition (VPR). VPR is a well-defined, albeit highly challenging, problem since places change their appearance rapidly due to varying viewpoints and conditions. Texture-less and low-informative scenes also pose difficulty for place matching. The task of a VPR system is to retrieve, given a query image, the best-matched reference image of the same place.
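
To make this retrieval task concrete, the toy snippet below picks the best-matching reference image for one query by cosine similarity over precomputed whole-image descriptors; the descriptors and their dimensionality are placeholders for illustration, not values taken from the letter.

```python
# Toy VPR retrieval step: nearest reference descriptor by cosine similarity.
import numpy as np

def retrieve_best_match(query_desc, reference_descs):
    """query_desc: (D,) vector; reference_descs: (N, D) matrix.
    Returns the index of the best-matching reference image and its score."""
    q = query_desc / (np.linalg.norm(query_desc) + 1e-12)
    r = reference_descs / (np.linalg.norm(reference_descs, axis=1, keepdims=True) + 1e-12)
    similarities = r @ q
    best = int(np.argmax(similarities))
    return best, float(similarities[best])

# Usage with random placeholder descriptors (100 reference images, 128-D each):
rng = np.random.default_rng(0)
references = rng.random((100, 128))
query = rng.random(128)
index, score = retrieve_best_match(query, references)
print(f"best reference image: {index}, cosine similarity: {score:.3f}")
```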
