Abstract

Existing reinforcement learning methods suffer severely from the curse of dimensionality, especially when applied to multiagent dynamic environments. RoboCup competitions are a typical example, since the other agents and their behaviors cause variations in the state and action spaces. This paper presents a method of modular learning in a multiagent environment by which a learning agent can acquire cooperative behavior with its teammates and competitive behavior against its opponents. The key ideas are as follows. First, a two-layer hierarchical system with multiple learning modules is adopted to reduce the size of the sensor and action spaces: the state space of the top layer consists of the state values from the lower level, and macro actions are used to reduce the size of the physical action space. Second, the state of the other agent, namely how close it is to its own goal, is estimated from observation and used as a state variable in the top-layer state space to realize the cooperative/competitive behavior. The method is applied to a four (defense team)-on-five (offense team) game task, and the learning agent (a passer on the offense team) successfully acquired teamwork plays (pass and shoot) in a much shorter learning time.
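To make the two-layer architecture concrete, the following is a minimal sketch in Python, assuming tabular Q-learning for each lower-level module and a top layer whose state combines the discretized state values of those modules with the observed progress of the other agent toward its goal. All class, method, and parameter names here are illustrative assumptions, not the paper's own formulation.

```python
import random
from collections import defaultdict

class LearningModule:
    """Lower-layer module: tabular Q-learning over a small,
    abstracted state/action space (illustrative sketch only)."""
    def __init__(self, n_actions, alpha=0.1, gamma=0.9, eps=0.1):
        self.q = defaultdict(float)          # Q[(state, action)] -> value
        self.n_actions = n_actions
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, s):
        if random.random() < self.eps:       # epsilon-greedy exploration
            return random.randrange(self.n_actions)
        return max(range(self.n_actions), key=lambda a: self.q[(s, a)])

    def update(self, s, a, r, s_next):
        best = max(self.q[(s_next, a2)] for a2 in range(self.n_actions))
        self.q[(s, a)] += self.alpha * (r + self.gamma * best - self.q[(s, a)])

    def state_value(self, s):
        """Greedy value of the module's current state; this scalar is
        what the top layer sees instead of the raw sensor vector."""
        return max(self.q[(s, a)] for a in range(self.n_actions))


class TopLayer:
    """Top layer: its state is built from the lower modules' state
    values plus an estimate of the other agent's goal progress, and
    its actions are macro actions (one per lower module)."""
    def __init__(self, modules, progress_bins=3):
        self.modules = modules
        self.progress_bins = progress_bins
        self.learner = LearningModule(n_actions=len(modules))

    def abstract_state(self, module_states, other_progress):
        # Discretize each module's value and the observed progress
        # (0.0..1.0) of the other agent toward its goal.
        values = tuple(round(m.state_value(s), 1)
                       for m, s in zip(self.modules, module_states))
        bin_ = min(int(other_progress * self.progress_bins),
                   self.progress_bins - 1)
        return values + (bin_,)

    def select_macro_action(self, module_states, other_progress):
        # A macro action picks which lower module (e.g. a "pass"
        # module or a "shoot" module) drives primitive actions next.
        return self.learner.act(
            self.abstract_state(module_states, other_progress))
```

Under these assumptions, the top layer only ever sees a few discretized values per module plus one progress bin, which is what keeps its state space small; the macro actions play the same role on the action side.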
