Abstract
This work studies Centralized Training and Decentralized Execution (CTDE), a powerful paradigm for easing multi-agent reinforcement learning. Although centralized evaluation yields unbiased Q-value estimates, peers with unknown policies can drive the decentralized policies far from expectation. To obtain a more stable and effective joint policy, we develop a novel game-theoretic framework, termed the Cournot Policy Model, to enhance CTDE-based multi-agent learning. Combining game theory with reinforcement learning, we cast the joint decision-making within a single time step as a Cournot duopoly model and design a Hetero Variational Auto-Encoder to model the policies of peers during decentralized execution. With the resulting conditional policy, each agent is guided toward a stable mixed-strategy equilibrium even as the joint policy evolves over time. We further demonstrate that such an equilibrium must exist under centralized evaluation. We evaluate the improvement our method brings to existing centralized learning methods, and experimental results on a comprehensive collection of benchmarks show that our approach consistently outperforms baseline methods.
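For context, the classical Cournot duopoly that lends the framework its name can be sketched as follows; the symbols here ($a$, $b$, $c$, $q_i$) are standard textbook notation and are not quantities defined in this paper. Two players simultaneously choose quantities $q_1, q_2 \ge 0$ under inverse demand $P(Q) = a - bQ$ with $Q = q_1 + q_2$ and common marginal cost $c$; each player best-responds to the other's quantity, and the two best responses intersect at a unique Nash equilibrium:

\[
\pi_i(q_i, q_j) = q_i\bigl(a - b(q_i + q_j) - c\bigr),
\qquad
q_i^*(q_j) = \frac{a - c - b\,q_j}{2b},
\qquad
q_1^* = q_2^* = \frac{a - c}{3b}.
\]

In the paper's setting, an agent's action plays the role of a firm's quantity, and the peer policy inferred by the Hetero Variational Auto-Encoder plays the role of the opponent's quantity $q_j$ to which the agent best-responds.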