Abstract

The design of robust vision-based robot navigation behaviors remains a challenge in mobile robotics, as it requires a coherent mapping between complex visual perceptions and the associated robot motions. This contribution proposes a framework to learn this general relationship from a small set of representative demonstrations in which an expert manually navigates the robot through its environment. Behaviors are represented by a dynamical system that ties perceptions to actions. The state of the behavioral dynamics is characterized by a small set of visual features extracted from an omnidirectional image of the local environment. Recording, learning, and generalization take place in the product space of visual features and robot controls. Training instances are recorded for three distinct behaviors, namely corridor following, obstacle avoidance, and homing. Behavioral dynamics are represented as Gaussian mixture models, the parameters of which are identified from the recorded demonstrations. The learned behaviors accomplish their tasks across a diverse set of initial poses and situations. In order to realize global navigation, the behaviors are coordinated via hand-designed arbitration or command-fusion schemes. The experimental validation of the proposed approach confirms that the acquired visual navigation behaviors, in cooperation, accomplish robust navigation in indoor environments.
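The abstract does not spell out implementation details, but the core idea of fitting a Gaussian mixture model in the joint feature-control space and then conditioning on the current perception to obtain an action is standard Gaussian mixture regression. The sketch below illustrates that pipeline; the dimensions, component count, and placeholder demonstration data are assumptions for illustration, not values from the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Assumed dimensions: d_x visual features, d_u control commands.
d_x, d_u = 8, 2

# demos: stacked expert demonstrations, one row per time step,
# columns = [visual features | robot controls] (the product space).
# Placeholder data; in practice these come from recorded teleoperation runs.
demos = np.random.randn(500, d_x + d_u)

# Fit a joint GMM over the perception-action product space.
gmm = GaussianMixture(n_components=5, covariance_type="full").fit(demos)

def predict_control(x):
    """Gaussian mixture regression: E[u | x] under the joint GMM."""
    means, covs, weights = gmm.means_, gmm.covariances_, gmm.weights_
    h = np.empty(len(weights))            # responsibilities p(k | x)
    u_k = np.empty((len(weights), d_u))   # per-component conditional means
    for k in range(len(weights)):
        mu_x, mu_u = means[k, :d_x], means[k, d_x:]
        S_xx = covs[k][:d_x, :d_x]        # feature-feature covariance
        S_ux = covs[k][d_x:, :d_x]        # control-feature cross-covariance
        inv = np.linalg.inv(S_xx)
        diff = x - mu_x
        # Weight of component k given the observed features.
        h[k] = weights[k] * np.exp(-0.5 * diff @ inv @ diff) / \
               np.sqrt((2 * np.pi) ** d_x * np.linalg.det(S_xx))
        # Conditional mean of the controls for component k.
        u_k[k] = mu_u + S_ux @ inv @ diff
    h /= h.sum()
    return h @ u_k  # responsibility-weighted blend of component actions

# Example: query the learned behavior with the current visual features.
u = predict_control(np.zeros(d_x))
```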
