Abstract

Multi-agent systems study the problem of designing complex systems composed of multiple autonomous agents, each with limited ability and partial observability. As a milestone, AlphaStar achieved remarkable success in StarCraft II, a significant breakthrough for competitive environments with complex strategic spaces and real-time decisions. However, deploying such centralized control models in real-world environments poses new challenges, because many of them were not designed to accommodate the constraints of real-world communication networks: high latency and heavy traffic are inevitable once they are actually deployed. To alleviate this issue, we propose a distributed control paradigm that explicitly splits control between a centralized meta-agent and the agent units, combining centralized and decentralized decision-making. Each unit can autonomously decide whether to follow the meta-agent's decisions or to adapt to environment variations immediately on its own, in a decentralized manner. We simulate real-world network conditions on the Mininet platform, and experiments in the StarCraft II Learning Environment (SC2LE) show that our approach adapts better to real-world network environments.
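To make the split-control idea concrete, below is a minimal Python sketch of one plausible switching rule: a unit follows the meta-agent's command only while that command is fresh, and otherwise falls back to its own decentralized policy. The class names (`AgentUnit`, `MetaCommand`), the staleness threshold, and the placeholder local policy are illustrative assumptions, not the paper's actual implementation.

```python
import time
from dataclasses import dataclass
from typing import Optional


@dataclass
class MetaCommand:
    action: str          # action proposed by the centralized meta-agent
    issued_at: float     # timestamp when the command was issued


class AgentUnit:
    """Hypothetical unit that switches between centralized and decentralized control."""

    def __init__(self, stale_after: float = 0.2):
        # If the meta-agent's command is older than this (seconds), treat it as
        # stale (e.g., delayed by network latency) and act locally instead.
        # The threshold value is purely illustrative.
        self.stale_after = stale_after

    def local_policy(self, observation: dict) -> str:
        # Placeholder decentralized policy acting on the unit's partial observation.
        return "hold_position" if observation.get("threat", 0.0) < 0.5 else "retreat"

    def act(self, observation: dict, command: Optional[MetaCommand]) -> str:
        now = time.time()
        # Follow the meta-agent when its command arrived recently enough;
        # otherwise adapt immediately using the unit's own policy.
        if command is not None and (now - command.issued_at) <= self.stale_after:
            return command.action
        return self.local_policy(observation)


# Usage example: a command delayed by high latency is ignored in favour of
# the unit's local decision.
unit = AgentUnit(stale_after=0.2)
late_cmd = MetaCommand(action="attack", issued_at=time.time() - 1.0)
print(unit.act({"threat": 0.8}, late_cmd))  # -> "retreat"
```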
