Abstract

The rapid development of deep reinforcement learning has led to its wide use in multi-agent environments to solve multi-agent cooperation problems. However, because multi-agent environments are unstable from the perspective of any single agent, training each agent independently with a deep reinforcement learning algorithm yields insufficient performance. In this work, we extend the maximum entropy deep reinforcement learning algorithm Soft Actor-Critic (SAC) under the framework of centralized training with decentralized execution and propose MASAC, a multi-agent deep reinforcement learning algorithm based on the maximum entropy framework. By treating all the agents as part of the environment, the proposed model effectively addresses the poor convergence caused by environmental instability. At the same time, we note a shortcoming of centralized training: when the information of all agents is fed into the critics, the information relevant to the current agent is easily lost. Inspired by the application of the self-attention mechanism in machine translation, we use self-attention to improve the critic and propose the ATT-MASAC algorithm, in which each agent discovers its relationship with the other agents through an encoder operation and attention calculation inside the critic networks. Compared with recent multi-agent deep reinforcement learning algorithms, ATT-MASAC converges better, and it remains more stable as the number of agents in the environment increases.
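To make the critic design concrete, the sketch below shows one plausible reading of the "encoder operation and attention calculation" described above: each agent's observation-action pair is embedded by a shared encoder, and a scaled dot-product self-attention step lets the current agent weight the other agents' embeddings before its Q-value is estimated. This is a minimal illustration under our own assumptions, not the authors' implementation; all names (AttentionCritic, embed_dim, q_head, and so on) are hypothetical.

    # Hypothetical sketch of an attention-based centralized critic; the
    # paper's exact architecture is not specified in the abstract.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class AttentionCritic(nn.Module):
        def __init__(self, n_agents, obs_dim, act_dim, embed_dim=64):
            super().__init__()
            # Encoder: embed each agent's (observation, action) pair.
            self.encoder = nn.Linear(obs_dim + act_dim, embed_dim)
            # Projections for scaled dot-product self-attention.
            self.query = nn.Linear(embed_dim, embed_dim, bias=False)
            self.key = nn.Linear(embed_dim, embed_dim, bias=False)
            self.value = nn.Linear(embed_dim, embed_dim, bias=False)
            # Q-value head: own embedding concatenated with attended context.
            self.q_head = nn.Sequential(
                nn.Linear(2 * embed_dim, embed_dim), nn.ReLU(),
                nn.Linear(embed_dim, 1),
            )

        def forward(self, obs, act, agent_idx):
            # obs: (batch, n_agents, obs_dim); act: (batch, n_agents, act_dim)
            e = F.relu(self.encoder(torch.cat([obs, act], dim=-1)))  # (B, N, D)
            q = self.query(e[:, agent_idx:agent_idx + 1])            # (B, 1, D)
            k, v = self.key(e), self.value(e)                        # (B, N, D)
            # Attention over all agents: the current agent learns how much
            # each agent's information matters to its own Q-value.
            scores = q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5    # (B, 1, N)
            context = torch.softmax(scores, dim=-1) @ v              # (B, 1, D)
            own = e[:, agent_idx]                                    # (B, D)
            return self.q_head(torch.cat([own, context.squeeze(1)], dim=-1))

In a MASAC-style setup, each agent would hold two such critics (plus target copies), train them against the entropy-regularized Bellman backup as in standard SAC, and keep a decentralized actor that conditions only on its own observation at execution time.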
