Abstract

Standard reinforcement learning methods are inefficient, and often inadequate, for learning cooperative multi-agent tasks. In such tasks the behavior of each agent depends strongly on its dynamic interaction with other agents, not only on its interaction with a static environment as in standard reinforcement learning. The success of learning is therefore coupled to the agents' ability to predict each other's behavior. In this study we address this problem by adding a few simple macro actions, i.e., actions that extend over more than one time step. The macro actions improve learning by making the search of the state space more effective and thereby making each agent's behavior more predictable to the other agent. We consider a cooperative mating task, which is a first step towards our aim of performing embodied evolution, where the evolutionary selection process is an integrated part of the task. We show, in simulation and in hardware, that agents learning without macro actions fail to acquire a meaningful behavior, whereas agents learning with macro actions acquire a good mating behavior in reasonable time, in both simulation and hardware.
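The abstract describes macro actions only at a high level, so the following is a minimal sketch of how temporally extended actions can be mixed into ordinary tabular Q-learning. The toy corridor environment, the macro definitions, and all parameters are illustrative assumptions, not the authors' task or implementation; the multi-step (SMDP-style) update is the general technique the abstract refers to.

```python
# Minimal sketch (illustrative, not the authors' implementation): tabular
# Q-learning whose action set mixes primitive actions with hand-crafted
# macro actions that are executed over several primitive time steps.
import random
from collections import defaultdict

GAMMA, ALPHA, EPSILON = 0.95, 0.1, 0.1

# Each action maps to a sequence of primitive moves; macros span >1 step.
ACTIONS = {
    "left": [-1],
    "right": [+1],
    "macro_left": [-1, -1, -1],    # temporally extended: three steps left
    "macro_right": [+1, +1, +1],   # temporally extended: three steps right
}

GOAL, N_STATES = 9, 10  # toy 1-D corridor; reaching GOAL ends the episode

def step(state, move):
    """One primitive transition; reward 1 only when the goal is reached."""
    next_state = max(0, min(N_STATES - 1, state + move))
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

Q = defaultdict(float)

def choose(state):
    """Epsilon-greedy choice over the mixed primitive/macro action set."""
    if random.random() < EPSILON:
        return random.choice(list(ACTIONS))
    return max(ACTIONS, key=lambda a: Q[(state, a)])

for episode in range(500):
    state, done = 0, False
    while not done:
        action = choose(state)
        # Execute the whole macro, accumulating the discounted reward.
        total, discount, s = 0.0, 1.0, state
        for move in ACTIONS[action]:
            s, r, done = step(s, move)
            total += discount * r
            discount *= GAMMA
            if done:
                break
        # One Q-update for the entire (possibly multi-step) action:
        # Q(s,a) += alpha * (R + gamma^tau * max_a' Q(s',a') - Q(s,a))
        best_next = 0.0 if done else max(Q[(s, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (total + discount * best_next - Q[(state, action)])
        state = s
```

Because a macro commits the agent to several steps at once, the effective search tree is shallower and the resulting trajectories are more regular, which is the predictability effect the abstract attributes to macro actions in the multi-agent setting.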
