Abstract

In this paper, we tackle the problem of unsupervised selection and subsequent recognition of visual landmarks in image sequences acquired by an indoor mobile robot. This is a highly valuable perceptual capability for a wide variety of robotic applications, in particular autonomous navigation. Our method combines a bottom-up, data-driven approach with top-down feedback provided by high-level semantic representations. The bottom-up approach is based on three main mechanisms: visual attention, area segmentation, and landmark characterization. As no segmentation method works properly in every situation, we integrate multiple segmentation algorithms in order to increase the robustness of the approach. Top-down feedback is provided by two information sources: (i) an estimate of the robot's position, which narrows the search for potential matches with previously selected landmarks, and (ii) a set of weights that, based on the results of previous recognitions, control the influence of each segmentation algorithm on the recognition of each landmark. We test our approach on three datasets corresponding to real-world scenarios, obtaining encouraging results.
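To make the second feedback source concrete, the sketch below shows one possible way to fuse match scores from several segmentation algorithms using per-landmark weights that are updated from recognition outcomes. This is a minimal illustration, not the paper's implementation; the class name, the 0-to-1 score convention, and the reinforcement-style update rule are all assumptions.

    # Minimal sketch (assumed, not the authors' method): weighted fusion of per-segmenter
    # match scores, with per-landmark weights adapted from recognition outcomes.
    from collections import defaultdict

    class WeightedLandmarkMatcher:
        def __init__(self, segmenter_names, learning_rate=0.1):
            self.segmenters = list(segmenter_names)
            self.lr = learning_rate
            # One weight per (landmark, segmenter), initialized uniformly.
            self.weights = defaultdict(
                lambda: {s: 1.0 / len(self.segmenters) for s in self.segmenters}
            )

        def fused_score(self, landmark_id, scores):
            # Combine per-segmenter match scores (assumed in [0, 1]) into one score.
            w = self.weights[landmark_id]
            return sum(w[s] * scores.get(s, 0.0) for s in self.segmenters)

        def update(self, landmark_id, scores, recognized):
            # Reinforce segmenters that agreed with the recognition outcome, then renormalize.
            w = self.weights[landmark_id]
            for s in self.segmenters:
                agreement = scores.get(s, 0.0) if recognized else 1.0 - scores.get(s, 0.0)
                w[s] += self.lr * agreement
            total = sum(w.values())
            for s in self.segmenters:
                w[s] /= total

    # Hypothetical usage: per-segmenter similarity scores for one candidate observation.
    matcher = WeightedLandmarkMatcher(["color_seg", "edge_seg", "region_growing"])
    scores = {"color_seg": 0.8, "edge_seg": 0.3, "region_growing": 0.7}
    if matcher.fused_score("landmark_42", scores) > 0.5:
        matcher.update("landmark_42", scores, recognized=True)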
