Abstract

Motivation. For certain applications of autonomous mobile robots, such as surveillance, cleaning or exploration, it is attractive to employ several robots simultaneously. Tasks like these are easily divided between independent robots, and using several robots at once promises faster task execution as well as more reliable and robust performance. For any robot operating in the real world, the question of how control is to be achieved is of prime importance. While fixed, user-defined behavioural strategies can be used to control robots, they tend to be brittle in practice because of the noisy and partly unpredictable nature of the real world. Learning is therefore an attractive alternative to fixed, pre-defined control procedures. Determining a suitable control strategy through learning, for a mobile robot operating in noisy and possibly dynamic environments, requires searching a very large state space. Parallelising this search across several collaboratively learning robots can accelerate the learning process.

A physically embedded GA (PEGA). In this paper, we present experiments conducted with two communicating mobile robots. Each robot's control policy was encoded as a genetic string. By communicating genetic strings and fitnesses to one another at regular intervals, the robots modified their individual control policies using a genetic algorithm (GA). In contrast to common GA approaches, we did not use a simulate-and-transfer method but implemented the GA directly on the robots. We were able to show that the following competences can all be acquired using the PEGA approach:
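The exchange-and-update step described above (each robot receives a peer's genetic string and fitness, then applies selection, crossover and mutation on board) can be sketched as follows. This is a hypothetical illustration only: the genome encoding, the operators and all parameters (`GENOME_LEN`, `MUTATION_RATE`, `pega_update`) are assumptions for the sketch, not the paper's actual implementation.

```python
import random

GENOME_LEN = 32       # length of the genetic string (assumption)
MUTATION_RATE = 0.02  # per-bit flip probability (assumption)

def crossover(a, b):
    """One-point crossover between two equal-length bitstrings."""
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(genome, rate=MUTATION_RATE):
    """Flip each bit independently with probability `rate`."""
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

def pega_update(own_genome, own_fitness, peer_genome, peer_fitness):
    """One on-board GA step after an exchange: fitness-proportional
    choice of the leading parent, crossover, then mutation.
    Hypothetical; the paper's exact operators may differ."""
    total = own_fitness + peer_fitness
    # The fitter genome is proportionally more likely to contribute
    # the leading segment of the offspring.
    if total > 0 and random.random() < own_fitness / total:
        child = crossover(own_genome, peer_genome)
    else:
        child = crossover(peer_genome, own_genome)
    return mutate(child)

# Example exchange between two robots:
random.seed(0)
a = [random.randint(0, 1) for _ in range(GENOME_LEN)]
b = [random.randint(0, 1) for _ in range(GENOME_LEN)]
child = pega_update(a, 0.8, b, 0.3)
assert len(child) == GENOME_LEN
```

Each robot would run `pega_update` whenever it receives a peer's string and fitness, replacing (or conditionally replacing) its own control policy with the offspring, so the population is physically distributed across the robots rather than simulated centrally.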
