Abstract
Insects are able to navigate reliably between food and nest using only visual information. This behavior has inspired many models of visual landmark guidance, some of which have been tested on autonomous robots. The majority of these models work by comparing the agent's current view with a view of the world stored when the agent was at the goal. The region from which agents can successfully reach home is therefore limited to the goal's visual locale, that is, the area around the goal where the visual scene is not radically different from the scene at the goal itself. Ants, however, are known to navigate over large distances using visually guided routes consisting of a series of visual memories. Taking inspiration from such route navigation, we propose a framework for linking together local navigation methods. We implement this framework on a robotic platform and test it in a series of environments in which local navigation methods fail. Finally, we show that the framework is robust to environments of varying complexity.
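To make the two ideas in the abstract concrete, the sketch below illustrates (1) view-based homing, in which the agent recovers a heading by comparing its current panoramic view with a stored view, and (2) chaining such local methods along a route by switching to the next stored view once the current one is matched. This is a minimal, hypothetical illustration, not the implementation described in the paper; the function names, the rotational image-difference matching, and the threshold are all assumptions made for the example.

```python
import numpy as np


def view_difference(a, b):
    """Root-mean-square pixel difference between two panoramic views."""
    return float(np.sqrt(np.mean((a.astype(float) - b.astype(float)) ** 2)))


def best_heading(current, stored):
    """Find the horizontal (azimuthal) shift at which the current panoramic
    view best matches the stored view, i.e. a rotational image-difference scan."""
    diffs = [view_difference(np.roll(current, s, axis=1), stored)
             for s in range(current.shape[1])]
    return int(np.argmin(diffs))


def follow_route(get_view, turn_and_step, route_memories,
                 match_threshold=5.0, max_steps=1000):
    """Chain local view-based homing along an ordered list of stored views.

    `get_view()` returns the current panoramic view as a 2D array (rows =
    elevation, columns = azimuth); `turn_and_step(heading)` turns toward the
    given azimuthal offset and moves a short distance; `route_memories` holds
    the views stored along the route, ending with the view at the goal.
    """
    waypoint = 0
    for _ in range(max_steps):
        current = get_view()
        # Once the active memory is matched well enough, hand over to the
        # next one, so the agent always homes within a reachable visual locale.
        if view_difference(current, route_memories[waypoint]) < match_threshold:
            waypoint += 1
            if waypoint == len(route_memories):
                return True  # final (goal) memory reached
        heading = best_heading(current, route_memories[waypoint])
        turn_and_step(heading)
    return False
```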