Abstract

Traditional multi-agent deep reinforcement learning suffers from difficulty obtaining rewards, slow convergence, and poor cooperation among agents during early training, owing to the large joint state space and sparse action rewards. This paper therefore examines the role of demonstration data in multi-agent systems and proposes a multi-agent deep reinforcement learning algorithm that fuses demonstration data with adaptive weights. The algorithm sets the weights according to performance and uses importance sampling to correct the bias in the mixed sampled data, combining expert data collected in the simulation environment with a distributed multi-agent reinforcement learning algorithm. This mitigates the difficulty of global exploration and improves the convergence speed of the algorithm. Results in the RoboCup2D soccer simulation environment show that the algorithm improves the agents' ball-holding and shooting ability, achieving a higher goal-scoring rate and faster convergence than the demonstration policies and mainstream multi-agent reinforcement learning algorithms.

Keywords: Multi-agent deep reinforcement learning; Exploration; Offline reinforcement learning; Importance sampling
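The abstract only sketches the mechanism, so as an illustration, below is a minimal Python sketch of how performance-adaptive fusion of demonstration and agent data with importance-sampling correction could be organized. The class name `MixedReplayBuffer`, the method `adapt_ratio`, and the specific weighting scheme are assumptions for illustration, not the paper's actual implementation.

```python
import random


class MixedReplayBuffer:
    """Hypothetical sketch: mixes demonstration and agent transitions.

    The demonstration sampling ratio is adapted from recent agent
    performance, and each sampled transition carries an importance-sampling
    weight that corrects for the bias of drawing from the adaptive mixture
    instead of a fixed 50/50 reference mixture.
    """

    def __init__(self, demo_data, capacity=100_000, initial_demo_ratio=0.5):
        self.demo_data = list(demo_data)       # fixed expert transitions
        self.agent_data = []                   # growing agent transitions
        self.capacity = capacity
        self.demo_ratio = initial_demo_ratio   # adaptive mixing weight

    def add(self, transition):
        """Store a transition generated by the learning agents."""
        self.agent_data.append(transition)
        if len(self.agent_data) > self.capacity:
            self.agent_data.pop(0)

    def adapt_ratio(self, recent_return, demo_return, decay=0.99):
        """Anneal the demo ratio as the agents approach expert performance.

        This heuristic is an assumption: the paper only states that weights
        are set according to performance, not the exact schedule.
        """
        if demo_return > 0:
            target = max(0.0, 1.0 - recent_return / demo_return)
            self.demo_ratio = decay * self.demo_ratio + (1 - decay) * target

    def sample(self, batch_size):
        """Draw a mixed batch and its importance-sampling weights."""
        ratio = self.demo_ratio if self.agent_data else 1.0
        n_demo = int(round(batch_size * ratio))
        batch, weights = [], []
        for _ in range(n_demo):
            batch.append(random.choice(self.demo_data))
            # Weight = reference probability / actual sampling probability.
            weights.append(0.5 / max(ratio, 1e-6))
        for _ in range(batch_size - n_demo):
            batch.append(random.choice(self.agent_data))
            weights.append(0.5 / max(1.0 - ratio, 1e-6))
        return batch, weights
```

In a training loop, the weights would scale each transition's loss term, so that transitions over-represented by the current mixture contribute proportionally less to the gradient update.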
