Abstract

Despite the increasing interest in multiagent reinforcement learning (MARL) across multiple communities, understanding its theoretical foundations has long been recognized as a challenging problem. In this article, we address this problem by providing a finite-sample analysis for decentralized batch MARL. Specifically, we consider a mixed MARL setting with both cooperative and competitive agents, in which two teams of agents compete in a zero-sum game, while the agents within each team collaborate by communicating over a time-varying network. This formulation subsumes many conventional MARL settings in the literature. We then develop batch MARL algorithms that can be implemented in a decentralized fashion, and quantify the finite-sample errors of the estimated action-value functions. Our error analysis captures how the function class, the number of samples within each iteration, and the number of iterations determine the statistical accuracy of the proposed algorithms. Compared with finite-sample bounds for single-agent reinforcement learning, our results involve additional error terms caused by the decentralized computation that is inherent in our setting. This article provides the first finite-sample analysis for batch MARL, a step toward a rigorous theoretical understanding of general MARL algorithms in the finite-sample regime.
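
To make the algorithmic idea concrete, the following is a minimal sketch of decentralized batch (fitted) Q-iteration with a consensus step, under simplifying assumptions not taken from the article: a single cooperative team, a tabular state-action space, and a caller-supplied row-stochastic weight matrix W(t) for the time-varying network. All names (decentralized_fitted_q, batches, neighbors) are illustrative; the article's two-team zero-sum setting would, in particular, replace the max target below with a minimax value.

# Illustrative sketch only, not the article's exact algorithm: each agent
# fits a local action-value estimate from its own batch of transitions,
# then averages its estimate with its neighbors' over a time-varying
# communication network.
import numpy as np

def decentralized_fitted_q(batches, neighbors, n_states, n_actions,
                           gamma=0.99, n_iters=50):
    """batches[i]: list of (s, a, r, s_next) transitions held by agent i.
    neighbors(t): returns the row-stochastic consensus weight matrix W(t)."""
    n_agents = len(batches)
    # One local tabular Q estimate per agent.
    Q = np.zeros((n_agents, n_states, n_actions))
    for t in range(n_iters):
        # 1) Local fitting step: regress toward the Bellman target computed
        #    from each agent's own batch (in the tabular case, least squares
        #    reduces to averaging the targets per (s, a) pair).
        targets = np.zeros_like(Q)
        counts = np.zeros_like(Q)
        for i, batch in enumerate(batches):
            for (s, a, r, s_next) in batch:
                targets[i, s, a] += r + gamma * Q[i, s_next].max()
                counts[i, s, a] += 1
        # Keep the previous estimate at (s, a) pairs with no samples.
        fitted = np.divide(targets, counts, out=Q.copy(), where=counts > 0)
        # 2) Consensus step: mix local estimates over the network,
        #    Q_i <- sum_j W_ij(t) * fitted_j.
        W = neighbors(t)
        Q = np.einsum('ij,jsa->isa', W, fitted)
    return Q

# Hypothetical usage: two agents with fixed doubly stochastic weights.
W = np.array([[0.5, 0.5], [0.5, 0.5]])
batches = [[(0, 0, 1.0, 1), (1, 1, 0.0, 0)],
           [(0, 1, 0.5, 1), (1, 0, 1.0, 0)]]
Q = decentralized_fitted_q(batches, lambda t: W, n_states=2, n_actions=2)

Each iteration alternates a local batch-fitting step with a consensus-averaging step; very roughly, the error sources named in the abstract map to the richness of the function class, the per-iteration fitting error from finite samples, the number of iterations, and the disagreement introduced by imperfect consensus.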
