Abstract

We present a biologically motivated model for visual self-localization that extracts a spatial representation of the environment directly from high-dimensional image data using a single unsupervised learning rule. The resulting representation encodes the position of the camera as slowly varying features while remaining invariant to its orientation, resembling place cells in the rodent hippocampus. An omnidirectional mirror allows us to manipulate the image statistics by adding simulated rotational movement for improved orientation invariance. We apply the model in indoor and outdoor experiments and, for the first time, compare its performance against two state-of-the-art visual SLAM methods. The results show that the proposed straightforward model enables precise self-localization, with accuracies in the range of 13-33 cm, demonstrating its competitiveness with the established SLAM methods in the tested scenarios.

Highlights

  • Many animals have excellent navigation capabilities and outperform current technical systems, especially in terms of robustness

  • While the resulting Slow Feature Analysis (SFA) units may already be sufficient for performing a navigation task, we want a metric measure to assess localization performance

  • To make a fair comparison, we provided the simultaneous localization and mapping (SLAM) methods with image data from the training and test runs and measured performance on the test run



Introduction

Many animals have excellent navigation capabilities and outperform current technical systems, especially in terms of robustness. In rodents, spatial information is encoded by different cell types in the hippocampus: place cells and head-direction cells encode the position and orientation of the animal and are strongly driven by visual input [1]. While the signals of single sensory receptors may change very rapidly, e.g., even with slight changes in orientation, the brain's high-level representations of position and orientation typically change on a much slower timescale. This observation has led to the concept of slowness learning [2,3,4,5].
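The slowness principle can be made concrete with a minimal linear Slow Feature Analysis sketch. This is not the paper's actual pipeline (which operates on high-dimensional image data); the function name and the toy signal below are illustrative. The idea: whiten the input time series, then find the whitened directions whose temporal derivative has minimal variance — those are the slowest features.

```python
import numpy as np

def linear_sfa(x, n_components):
    """Minimal linear SFA: return the n_components slowest
    unit-variance output signals of the time series x (shape T x D)."""
    x = x - x.mean(axis=0)
    # Whiten: decorrelate the inputs and scale each direction to unit variance.
    evals, evecs = np.linalg.eigh(np.cov(x, rowvar=False))
    z = x @ (evecs / np.sqrt(evals))
    # Among whitened directions, the slowest ones minimize the variance
    # of the temporal difference signal.
    d_evals, d_evecs = np.linalg.eigh(np.cov(np.diff(z, axis=0), rowvar=False))
    # eigh returns eigenvalues in ascending order, so the slowest come first.
    return z @ d_evecs[:, :n_components]

# Toy demo: a slow sine mixed with a fast one; SFA recovers the slow source.
t = np.linspace(0, 2 * np.pi, 500)
sources = np.stack([np.sin(t), np.sin(25 * t)], axis=1)
mixed = sources @ np.array([[1.0, 2.0], [2.0, 1.0]])
slow_feature = linear_sfa(mixed, 1)[:, 0]
```

For a linear mixture like this, the slowest extracted feature matches the slow source up to sign and scale; in the paper's setting the analogous slow outputs encode camera position while faster image variations (such as those induced by rotation) are suppressed.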


