Abstract

With the rapid development of mobile robots, they have come to be widely used in industrial manufacturing, logistics scheduling, intelligent medical care, and other fields. In large-scale task spaces, communication between agents is key to cooperative productivity, and agents can coordinate more effectively with the help of dynamic communication. However, traditional communication mechanisms use simple message aggregation and broadcasting and, in some cases, fail to distinguish the importance of information. Multiagent deep reinforcement learning (MDRL) is an effective way to learn coordinated communication strategies; however, how different messages affect each agent's decision-making remains a challenging problem in large-scale tasks. To solve this problem, we propose IMANet (Important Message Attention Network). It divides the decision-making process into two substages, communication and action, where communication is treated as part of the environment. First, an attention mechanism based on query vectors is introduced: the correlation between a query vector derived from an agent's own information and the current state information of the other agents is estimated, and the result is used to distinguish the importance of the information received from each agent. Second, an LSTM network serves as the unit controller for each agent, and individual rewards guide agent training after communication. Finally, IMANet is evaluated on two challenging multiagent platforms, Predator and Prey (PP) and Traffic Junction. The results show that IMANet improves the efficiency of learning and training, especially in large-scale task spaces, achieving a success rate 12% higher than CommNet in the baseline experiments.
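
To make the mechanism concrete, the following is a minimal PyTorch sketch of the query-vector attention described above. The module name MessageAttention and the parameter hidden_dim are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MessageAttention(nn.Module):
    """Scores other agents' messages by their relevance to one agent's own state."""
    def __init__(self, hidden_dim):
        super().__init__()
        self.query = nn.Linear(hidden_dim, hidden_dim)  # query from the agent's own state
        self.key = nn.Linear(hidden_dim, hidden_dim)    # one key per incoming message

    def forward(self, own_state, messages):
        # own_state: (hidden_dim,); messages: (n_other_agents, hidden_dim)
        q = self.query(own_state)              # query vector for this agent
        k = self.key(messages)                 # keys for the other agents' messages
        scores = k @ q / (q.shape[-1] ** 0.5)  # scaled dot-product correlation
        weights = F.softmax(scores, dim=0)     # relative importance of each message
        return weights @ messages              # importance-weighted communication vector

Under this reading, the softmax yields a convex combination of incoming messages, so less relevant messages are downweighted rather than discarded outright.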

Highlights

  • Multiagent systems are highly practical in distributed control, remote scheduling, and modeling analysis [1]

  • We propose the IMANet method for multiagent deep reinforcement learning

  • The following are the three architectures of multiagent reinforcement learning: (1) decentralized: without a central controller, each agent makes independent decisions based on its own policy network; (2) centralized: a single central controller gathers information from all agents and selects their actions; (3) centralized training with decentralized execution: agents are trained with access to global information but act on their own local observations


Summary

Introduction

Multiagent systems are highly practical in distributed control, remote scheduling, and modeling analysis [1]. Broadcast communication is a common setting in the study of "learning to communicate" between agents, but it does not allow selective attention to the observations and actions of other agents, fails to provide useful information to agents during decision-making, and leads to unstable learning. These problems arise because traditional reinforcement learning methods cannot learn cooperative strategies through effective communication under Dec-POMDP conditions [15]. Our independent control model selectively passes on important information, alleviating the problems associated with dimensional explosion and making it possible for agents to learn coordination strategies in large-scale spaces.
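
As a companion to the attention sketch above, the following minimal PyTorch sketch shows how a per-agent LSTM unit controller could realize the two-substage (communicate, then act) decision process from the abstract. The class UnitController and the parameters obs_dim, hidden_dim, and n_actions are illustrative assumptions rather than the paper's implementation.

import torch
import torch.nn as nn

class UnitController(nn.Module):
    """Per-agent recurrent controller: consume the communication result, then act."""
    def __init__(self, obs_dim, hidden_dim, n_actions):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, hidden_dim)        # encodes the local observation
        self.lstm = nn.LSTMCell(2 * hidden_dim, hidden_dim)  # recurrent state carried across steps
        self.policy = nn.Linear(hidden_dim, n_actions)       # maps hidden state to action logits

    def forward(self, obs, comm, state):
        # Substage 1 (communication) produced `comm`; substage 2 selects the action.
        x = torch.cat([torch.relu(self.encoder(obs)), comm], dim=-1)
        h, c = self.lstm(x, state)
        return self.policy(h), (h, c)  # action logits and the updated recurrent state

Keeping a recurrent state per agent lets individual rewards shape each controller independently, matching the independent control model described above.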

Notation and Background
Related Work
IMANet Method
Experimental Setup
Findings
Conclusions
