Abstract

In value decomposition-based multiagent reinforcement learning (MARL) algorithms, the overall performance of the multiagent system is represented by a scalar global Q value and optimized by minimizing the temporal difference (TD) error with respect to that global Q value. However, the global Q value cannot accurately model the distributed dynamics of the multiagent system, since it is only a simplified summary of the agents' different individual Q values. To explicitly capture the correlations among cooperative agents, in this article we propose a distributional framework and, from this novel perspective, construct a practical model called distributional multiagent cooperation (DMAC). Specifically, in DMAC, we view the individual Q value for the executed action of a randomly selected agent as a value distribution, whose expectation represents the overall performance. We then employ distributional RL to minimize the difference between the estimated distribution and its target. The advantage of DMAC is that the distributed dynamics of the agents are explicitly modeled, which leads to better performance. To verify the effectiveness of DMAC, we conduct extensive experiments on nine different scenarios of the StarCraft Multiagent Challenge (SMAC). Experimental results show that DMAC significantly outperforms the baselines in terms of the average median test win rate.
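As a rough sketch of this distributional view (our own notation, not necessarily the paper's exact formulation), one can write the executed-action Q value of a uniformly sampled agent as a random variable whose expectation plays the role of the global value, and train it with a distributional TD objective:

Z(s, \mathbf{a}) := Q_{i}(s, a_{i}), \qquad i \sim \mathrm{Uniform}\{1, \dots, N\},

Q_{\mathrm{tot}}(s, \mathbf{a}) = \mathbb{E}\big[ Z(s, \mathbf{a}) \big] = \tfrac{1}{N} \sum_{i=1}^{N} Q_{i}(s, a_{i}),

\mathcal{L} = \mathbb{E}\Big[ D\big( Z(s, \mathbf{a}), \; r + \gamma\, Z(s', \mathbf{a}') \big) \Big],

where N is the number of agents, D is a distributional distance as used in distributional RL (for example, a quantile-regression-style loss), and the target distribution r + \gamma Z(s', \mathbf{a}') would typically be computed with a target network. The uniform sampling of the agent index i and the specific choice of D are illustrative assumptions made for this sketch.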
