Abstract

This paper presents a robot self-localization method based on visual attention. The method takes advantage of the saliency-based model of attention to automatically learn configurations of salient visual landmarks along a robot path. During navigation, the visual attention algorithms detect a set of conspicuous visual features, which are compared with the learned landmark configurations in order to determine the robot's position on the navigation path. More specifically, the multi-cue attention model detects the most salient visual features, which are potential candidates for landmarks. These features are then characterized by a visual descriptor vector computed from various visual cues and at different scales. By tracking the detected features over time, our landmark selection procedure automatically evaluates their robustness and retains only the most robust features as landmarks. The selected landmarks are then organized into a topological map that is used for self-localization during the navigation phase. The self-localization method is based on matching the configuration of currently detected visual features against the configurations of the learned landmarks. The matching procedure yields a probabilistic measure of the whereabouts of the robot. Thanks to the multi-featured input of the attention model, our method is potentially able to deal with a wide range of navigation environments.
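To make the localization step concrete, the following is a minimal sketch (not the authors' implementation) of matching-based probabilistic localization over a topological map: each map node stores the descriptor vectors of the landmarks learned at that location, every node is scored by how well the currently detected feature descriptors match its landmark configuration, and the scores are normalized into a probability distribution over the path. The descriptor dimensionality, the Gaussian similarity kernel, and the parameter `sigma` are assumptions for illustration only.

```python
import numpy as np

def match_score(landmarks, features, sigma=0.5):
    """Sum of best-match similarities between a node's landmark descriptors
    and the currently detected feature descriptors (Gaussian kernel on the
    descriptor distance; sigma is an assumed tuning parameter)."""
    score = 0.0
    for lm in landmarks:
        dists = np.linalg.norm(features - lm, axis=1)  # distance to every detected feature
        score += np.exp(-dists.min() ** 2 / (2 * sigma ** 2))
    return score

def localize(topological_map, features):
    """Return a probability over map nodes given the observed feature descriptors."""
    scores = np.array([match_score(node, features) for node in topological_map])
    return scores / scores.sum()

# Toy usage: a 3-node map with five 4-dimensional landmark descriptors per node.
rng = np.random.default_rng(0)
topo_map = [rng.normal(size=(5, 4)) for _ in range(3)]        # learned landmark configurations
observed = topo_map[1] + rng.normal(scale=0.05, size=(5, 4))  # features observed near node 1
print(localize(topo_map, observed))                           # highest probability at node 1
```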
