Abstract

This paper presents an unconventional approach to vision-guided autonomous navigation. The system recalls information about scenes and navigational experience using content-based retrieval from a visual database. To achieve high applicability to various road types, we do not impose a priori scene features, such as road edges, that the system must use; rather, the system automatically derives features from images during supervised learning. To accomplish this, the system uses principal component analysis and linear discriminant analysis to automatically derive the most expressive features (MEF) for scene reconstruction and the most discriminating features (MDF) for scene classification. These features best describe or classify the population of scenes and approximate complex decision regions with piecewise linear boundaries to a desired accuracy. A new self-organizing scheme called the recursive partition tree (RPT) automatically constructs the vision-and-control database; it quickly prunes the data set during content-based search, yielding a low time complexity of O(log n) for retrieval from a database of size n. The system combines principal component and linear discriminant analysis networks with a decision tree network. It has been tested on a mobile robot, Rome, in an unknown indoor environment, where it learns scenes and the associated navigation experience. In the performing phase, the mobile robot navigates autonomously in similar environments while tolerating scene perturbations such as passersby.
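
Below is a minimal sketch, not the authors' implementation, of the pipeline the abstract describes: deriving MEFs by principal component analysis and organizing the projected scenes into a recursive partition tree so that retrieval of a stored scene and its associated control takes O(log n) comparisons. All names (ScenePCA, RPTNode) are hypothetical, the LDA (MDF) stage is omitted for brevity, and the split criterion here (maximum-variance dimension at the median) is a simplification of the paper's self-organizing scheme.

```python
import numpy as np

class ScenePCA:
    """Derive MEFs: project flattened scene images onto the top-k principal components."""
    def __init__(self, k):
        self.k = k

    def fit(self, X):                     # X: (n_samples, n_pixels)
        self.mean = X.mean(axis=0)
        Xc = X - self.mean
        cov = Xc.T @ Xc / (len(X) - 1)
        vals, vecs = np.linalg.eigh(cov)  # eigh returns ascending eigenvalues
        self.components = vecs[:, ::-1][:, :self.k]   # keep the k most expressive directions
        return self

    def transform(self, X):
        return (X - self.mean) @ self.components

class RPTNode:
    """Binary partition node: splits the samples on one feature dimension at the median."""
    def __init__(self, features, controls):
        if len(features) <= 1:            # leaf: store the scene vector and its control
            self.leaf = (features[0], controls[0])
            return
        self.leaf = None
        # Split on the dimension with greatest variance among samples at this node.
        self.dim = int(np.argmax(features.var(axis=0)))
        self.threshold = float(np.median(features[:, self.dim]))
        mask = features[:, self.dim] <= self.threshold
        if mask.all() or (~mask).all():   # degenerate split: force a balanced partition
            order = np.argsort(features[:, self.dim])
            mask = np.zeros(len(features), bool)
            mask[order[: len(order) // 2]] = True
        self.left = RPTNode(features[mask], controls[mask])
        self.right = RPTNode(features[~mask], controls[~mask])

    def retrieve(self, f):
        """Descend one branch per level: O(log n) comparisons for n stored scenes."""
        if self.leaf is not None:
            return self.leaf
        child = self.left if f[self.dim] <= self.threshold else self.right
        return child.retrieve(f)

# Usage: learn scenes with associated controls, then retrieve a control for a new view.
rng = np.random.default_rng(0)
scenes = rng.random((64, 32 * 24))        # 64 stand-in training images, 32x24 pixels
controls = rng.uniform(-1, 1, 64)         # e.g. steering commands paired with each scene
pca = ScenePCA(k=8).fit(scenes)
tree = RPTNode(pca.transform(scenes), controls)
_, steering = tree.retrieve(pca.transform(scenes[:1])[0])
```

Because each node discards roughly half of the remaining candidates, the tree prunes the database during content-based search in the way the abstract describes, at the cost of the usual tree-search caveat that a query near a partition boundary may descend the wrong branch.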
