Abstract

Recent developments in reinforcement learning have produced sophisticated and capable agents that achieve human-level performance on a number of challenging tasks. In multi-agent systems (MASs), however, complexities such as non-stationarity and partial observability introduce new challenges, and building a flexible and efficient multi-agent reinforcement learning (MARL) algorithm capable of handling complex tasks remains an open problem. This paper presents a Multi-Agent learning system with the evolution of Social Roles (eSRMA). The main focus is on solving the key issues of defining and evolving suitable roles and of efficiently optimizing the role-conditioned policies in a MAS. Specifically, eSRMA instills and cultivates role-division awareness in agents to improve their ability to handle complex cooperative tasks. Each agent is equipped with a role module that dynamically generates roles from the agent's local observations. A novel MARL algorithm serves as the principal driving force governing the role-policy learning process through a role-attention credit assignment mechanism. Moreover, a role evolution process helps agents dynamically choose appropriate roles during decision-making. Comprehensive experiments on the StarCraft II micromanagement benchmark demonstrate that eSRMA achieves higher learning capability and efficiency than state-of-the-art MARL methods.
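The abstract does not specify implementation details, but the two core components it names (a role module that generates roles from local observations, and a role-attention credit assignment over per-agent utilities) can be illustrated with a minimal PyTorch sketch. All names (RoleModule, RoleAttentionMixer) and dimensions below are hypothetical assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class RoleModule(nn.Module):
    """Hypothetical: maps an agent's local observation to a latent role embedding."""
    def __init__(self, obs_dim: int, role_dim: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, role_dim),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # obs: (n_agents, obs_dim) -> roles: (n_agents, role_dim)
        return self.encoder(obs)

class RoleAttentionMixer(nn.Module):
    """Hypothetical: mixes per-agent utilities into a joint value, assigning
    credit to each agent via attention between its role and the global state."""
    def __init__(self, role_dim: int, state_dim: int):
        super().__init__()
        self.query = nn.Linear(state_dim, role_dim)  # global state -> attention query

    def forward(self, agent_qs: torch.Tensor, roles: torch.Tensor,
                state: torch.Tensor) -> torch.Tensor:
        # agent_qs: (n_agents,), roles: (n_agents, role_dim), state: (state_dim,)
        q = self.query(state)                           # (role_dim,)
        scores = roles @ q / roles.shape[-1] ** 0.5     # scaled dot-product scores
        weights = torch.softmax(scores, dim=0)          # credit weight per agent
        return (weights * agent_qs).sum()               # joint value estimate

# Usage with assumed dimensions
n_agents, obs_dim, role_dim, state_dim = 4, 10, 8, 16
role_net = RoleModule(obs_dim, role_dim)
mixer = RoleAttentionMixer(role_dim, state_dim)
roles = role_net(torch.randn(n_agents, obs_dim))
q_tot = mixer(torch.randn(n_agents), roles, torch.randn(state_dim))
```

In this sketch the attention weights act as the credit assignment: agents whose role embeddings align with the state-conditioned query receive a larger share of the joint value during training. The paper's role evolution process, which lets agents switch roles over time, is not modeled here.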
