Multi-agent coordination is one of the critical problems in Multi-Agent Reinforcement Learning (MARL). Traditional MARL methods focus on finding a stochastically acceptable solution, the Nash Equilibrium (NE), for all agents in Markov games where multiple equilibria exist. However, learning a fair equilibrium is crucial for the sustainability and stability of collaboration in long-term coordination games, especially when competition for leadership exists. In this paper, we propose N-Bi-AC, a bi-level reinforcement learning method whose solution is a Pareto improvement over the traditional NE, to select a fair equilibrium. Our method has two parts: first, we propose a Negotiator that determines the leader in each stage game; second, we update the agents' Q-values using a bi-level actor-critic learning method based on the Joint Mixed Strategy Equilibrium Q-learning algorithm (JMSE Q-learning). We give a convergence proof and compare the learning algorithm with state-of-the-art algorithms. The proposed N-Bi-AC method successfully converges to a fair Nash Equilibrium, which guarantees fairness among agents in different matrix game environments.
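The abstract does not specify the N-Bi-AC algorithm itself. To illustrate only the bi-level (leader-follower) structure it describes, below is a minimal tabular sketch on a hypothetical 2x2 matrix game: the follower's Q-table is conditioned on the leader's action, and the leader optimizes against the follower's greedy response. All names (payoffs, q_leader, q_follower), the payoff matrix, and the learning schedule are illustrative assumptions, not the authors' method or the JMSE Q-learning update.

```python
import numpy as np

# Hypothetical 2x2 stage game with two pure Nash equilibria;
# payoffs[a_l, a_f] = (leader reward, follower reward).
payoffs = np.array([[(4.0, 4.0), (0.0, 0.0)],
                    [(0.0, 0.0), (2.0, 6.0)]])

n_actions = 2
alpha, episodes = 0.1, 5000
rng = np.random.default_rng(0)

# Bi-level structure in tabular form: the follower's Q-value is
# conditioned on the leader's action, mirroring a Stackelberg game.
q_leader = np.zeros(n_actions)                 # Q_l(a_l)
q_follower = np.zeros((n_actions, n_actions))  # Q_f(a_l, a_f)

for t in range(episodes):
    eps = max(0.05, 1.0 - t / episodes)        # decaying exploration
    # Lower level: the follower's greedy response to each leader action.
    best_response = q_follower.argmax(axis=1)
    # Upper level: the leader acts anticipating that response.
    a_l = q_leader.argmax() if rng.random() > eps else rng.integers(n_actions)
    a_f = best_response[a_l] if rng.random() > eps else rng.integers(n_actions)
    r_l, r_f = payoffs[a_l, a_f]
    # Bandit-style Q updates for the repeated stage game.
    q_leader[a_l] += alpha * (r_l - q_leader[a_l])
    q_follower[a_l, a_f] += alpha * (r_f - q_follower[a_l, a_f])

a_l = q_leader.argmax()
print("leader action:", a_l, "follower response:", q_follower[a_l].argmax())
```

In this toy game the leader learns to commit to the action whose induced follower best response yields the (4, 4) outcome rather than the (2, 6) one, which is the kind of equilibrium selection the bi-level formulation enables; the paper's Negotiator and JMSE Q-learning components address how leadership is assigned and how the fair equilibrium is chosen.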