Abstract

Visual homing describes the ability of a robot to autonomously return to its starting position along a previously traversed path using visual information. In this paper, we propose a method for visual homing that is based solely on bearing angles to landmarks. During the first traversal of a path, the robot creates a sequence of viewframes, i.e., rotationally aligned landmark angle configurations recorded at certain locations. While homing, the robot computes homing vectors that incrementally align the currently perceived set of landmark observations with the reference viewframe until the home location is reached. This paper discusses existing methods for homing vector calculation and proposes new methods that are more robust to non-isotropic landmark distributions and false landmark matches and that yield straighter homing paths. Furthermore, we present the Trail-Map, a novel data structure for storing a sequence of viewframes in a non-redundant and scalable way. The Trail-Map exploits the fact that the bearing angles to distant landmarks and to landmarks in the direction of movement hardly change when the robot moves, whereas close landmarks change their bearing angles quickly. The Trail-Map therefore enables easy downscaling: observations corresponding to nearby, fast-changing landmarks can be deleted while the stable, translation-invariant landmark information is retained. We demonstrate the memory efficiency and scalability of the data structure in simulations and in real-world indoor and outdoor experiments. These properties make the proposed method for visual homing suitable for mobile robots with limited computational and memory resources.
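To make the bearing-only idea concrete, the following is a minimal sketch of one classic homing-vector scheme, the average-landmark-vector model; this is an illustrative stand-in under assumed conventions (bearings in radians, matched landmark sets), not the specific calculation methods proposed in the paper:

```python
import math

def homing_vector(current_bearings, reference_bearings):
    """Bearing-only homing vector via the average-landmark-vector scheme.

    Each bearing (radians, in a common rotational frame) is turned into a
    unit vector; the difference between the current and reference averages
    points approximately toward the reference (home) location.
    """
    def avg_vec(bearings):
        x = sum(math.cos(b) for b in bearings) / len(bearings)
        y = sum(math.sin(b) for b in bearings) / len(bearings)
        return x, y

    cx, cy = avg_vec(current_bearings)
    rx, ry = avg_vec(reference_bearings)
    # Moving along (current - reference) average landmark vector
    # drives the two bearing configurations into alignment.
    return cx - rx, cy - ry
```

Repeatedly moving a small step along this vector and re-measuring the bearings aligns the current viewframe with the stored one, which is the iterative behavior the abstract describes.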
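The downscaling principle behind the Trail-Map can also be sketched. The snippet below is a hypothetical simplification, not the paper's actual data structure: it represents viewframes as plain dictionaries of landmark bearings and prunes the landmarks whose bearings change most along the path, which by the abstract's argument are the nearby ones:

```python
import math

def downscale(viewframes, keep):
    """Keep only the `keep` most bearing-stable landmarks.

    viewframes: list of dicts mapping landmark_id -> bearing (radians),
    ordered along the traversed path. Landmarks whose bearings vary
    least (a proxy for being distant or in the movement direction)
    are retained; fast-changing nearby landmarks are dropped.
    """
    # Collect each landmark's bearing track across the sequence.
    tracks = {}
    for vf in viewframes:
        for lid, b in vf.items():
            tracks.setdefault(lid, []).append(b)

    # Total absolute bearing change, with angle wrap-around handled.
    def change(bearings):
        total = 0.0
        for a, b in zip(bearings, bearings[1:]):
            total += abs((b - a + math.pi) % (2 * math.pi) - math.pi)
        return total

    stable = set(sorted(tracks, key=lambda lid: change(tracks[lid]))[:keep])
    return [{lid: b for lid, b in vf.items() if lid in stable}
            for vf in viewframes]
```

Deleting the fast-changing observations shrinks the stored sequence while preserving exactly the bearings that remain useful over longer translations, which is the scalability property the experiments evaluate.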
