Abstract

An autonomous robot acting in a complex environment needs to learn which action to perform in each external state. To this end, we propose a new control architecture for autonomous robots. The architecture includes a learning system composed of globally coupled chaotic elements. Each chaotic element has dynamics designed so that the elements can collectively execute reinforcement learning. While each element continuously updates its internal state according to its intrinsic dynamics, this local processing collectively determines an action of the robot and allows it to interact with its environment. The result of this interaction returns to the elements as a payoff, which modifies their dynamics. We carried out computational experiments on a navigation task. The results show that the chaotic elements self-organize so that an autonomous mobile robot exhibits behaviors such as goal reaching, wall following, and collision avoidance.
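To make the mechanism concrete, the loop described above can be sketched as a globally coupled map in the style of Kaneko's coupled chaotic elements, where a scalar payoff signal feeds back into the coupling. This is a minimal illustrative sketch under assumed dynamics; the class, the logistic-type map, and the payoff-to-coupling update are assumptions for illustration, not the paper's exact model.

```python
import numpy as np

class CoupledChaoticElements:
    """Hypothetical sketch of globally coupled chaotic elements
    whose coupling strength is nudged by an environmental payoff."""

    def __init__(self, n_elements=16, a=1.7, eps=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.x = rng.uniform(-1.0, 1.0, n_elements)  # internal states
        self.a = a       # nonlinearity of each logistic-type element
        self.eps = eps   # global coupling strength

    def step(self):
        # Each element follows its intrinsic chaotic dynamics,
        # plus a mean-field (globally coupled) term.
        fx = 1.0 - self.a * self.x ** 2
        self.x = (1.0 - self.eps) * fx + self.eps * fx.mean()
        return self.x

    def action(self):
        # Collective read-out: the sign of the mean state selects
        # one of two robot actions (e.g. turn left / turn right).
        return 0 if self.x.mean() < 0.0 else 1

    def reinforce(self, payoff, lr=0.01):
        # Payoff from the environment feeds back into the dynamics:
        # here it simply shifts the coupling strength, clipped to [0, 1].
        self.eps = float(np.clip(self.eps + lr * payoff, 0.0, 1.0))

elements = CoupledChaoticElements()
for t in range(100):
    elements.step()
    act = elements.action()
    payoff = 1.0 if act == 1 else -1.0  # toy stand-in for the environment
    elements.reinforce(payoff)
```

In this toy loop the "environment" simply rewards one action, so the payoff steadily adjusts the coupling; in the paper's setting the payoff would instead come from the navigation task (e.g. approaching the goal or avoiding a collision).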
