Abstract

Navigation in unknown, dynamic environments remains a major challenge in robotics, whereas insects such as the desert ant solve this task efficiently despite very limited computing and memory capacities. Understanding the neural mechanisms underlying insect navigation can therefore inform the design of simpler yet robust autonomous robots. Building on recent developments in insect neuroethology and cognitive psychology, we propose a method for landmark navigation in dynamic environments. Our method enables the navigator to learn the reliability of landmarks through an expectation-reinforcement mechanism. To that end, we implemented a real-time neuronal model based on the Distributed Adaptive Control framework. The results demonstrate that our model learns the stability of landmarks by reinforcing its expectations, and that the proposed mechanism allows the navigator to optimally restore its confidence when its expectations are violated. We also performed navigational experiments with real ants to compare against the model's results; the behavior of the proposed autonomous navigator closely resembles real ant navigational behavior. Moreover, our model explains navigation in dynamic environments as a memory consolidation process that harnesses expectations and their violations.
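For illustration only, the sketch below shows one possible way an expectation-reinforcement update for landmark reliability could be organized: each landmark carries a reliability weight that is strengthened when the navigator's expectation about that landmark is confirmed and weakened when it is violated. The class name, the delta-rule update, and the learning rate are assumptions made for this example and are not taken from the paper's model.

```python
class LandmarkReliability:
    """Minimal sketch (not the paper's implementation) of reliability
    learning driven by confirmed vs. violated expectations."""

    def __init__(self, n_landmarks: int, learning_rate: float = 0.1):
        self.reliability = [0.5] * n_landmarks  # start uncommitted
        self.lr = learning_rate

    def update(self, landmark_id: int, expected_seen: bool, actually_seen: bool) -> float:
        # Reinforce reliability when the expectation is met, reduce it when violated.
        target = 1.0 if expected_seen == actually_seen else 0.0
        r = self.reliability[landmark_id]
        self.reliability[landmark_id] = r + self.lr * (target - r)
        return self.reliability[landmark_id]

    def confidence(self) -> float:
        # Overall navigation confidence as the mean reliability across landmarks.
        return sum(self.reliability) / len(self.reliability)


# Usage: a stable landmark gains reliability, a displaced one loses it.
learner = LandmarkReliability(n_landmarks=2)
for _ in range(10):
    learner.update(0, expected_seen=True, actually_seen=True)   # stable landmark
    learner.update(1, expected_seen=True, actually_seen=False)  # violated expectation
print(learner.reliability, learner.confidence())
```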
