Abstract
In fully cooperative multi-agent reinforcement learning (MARL), effective communication can induce implicit cooperation among agents and improve overall performance. In current communication strategies, agents exchange local observations or latent embeddings, which augment individual local policy inputs and mitigate uncertainty in local decision-making. Unfortunately, under previous communication schemes, agents may receive irrelevant information, which increases training difficulty and leads to poor performance in complex settings. Furthermore, most existing works overlook the impact of small coalitions formed by agents within the multi-agent system. To address these challenges, we propose HyperComm, a novel framework that uses a hypergraph to model the multi-agent system, improving the accuracy and specificity of communication among agents. To the best of our knowledge, our approach is the first to bring hypergraphs into multi-agent communication for MARL. Within this framework, each agent communicates more effectively with the other agents in the same hyperedge, leading to better cooperation in environments with many agents. Compared with state-of-the-art communication-based approaches, HyperComm demonstrates remarkable performance in scenarios involving a large number of agents.
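To make the core idea concrete, the sketch below illustrates hyperedge-scoped message aggregation: each agent exchanges embeddings only with peers that share a hyperedge (a small coalition), rather than broadcasting to all agents. This is an illustrative toy, not the paper's implementation; the function `aggregate_messages` and its mean-pooling rule are assumptions made for exposition.

```python
def aggregate_messages(embeddings, hyperedges):
    """Hyperedge-scoped communication (illustrative sketch).

    embeddings: dict mapping agent_id -> embedding (list of floats)
    hyperedges: list of sets of agent ids, each set a small coalition

    Returns a dict mapping agent_id -> incoming message, computed as the
    elementwise mean of the embeddings of all hyperedge peers (the agent
    itself excluded). Agents in no shared hyperedge receive a zero message,
    so irrelevant agents never contribute to an agent's policy input.
    """
    messages = {}
    for agent, emb in embeddings.items():
        # Collect every peer that shares at least one hyperedge with `agent`.
        peers = set()
        for edge in hyperedges:
            if agent in edge:
                peers |= edge - {agent}
        if not peers:
            messages[agent] = [0.0] * len(emb)
            continue
        dim = len(emb)
        messages[agent] = [
            sum(embeddings[p][d] for p in peers) / len(peers)
            for d in range(dim)
        ]
    return messages

# Example: agents 0 and 1 form one coalition; agent 2 is isolated.
emb = {0: [1.0, 0.0], 1: [3.0, 2.0], 2: [5.0, 5.0]}
msgs = aggregate_messages(emb, [{0, 1}])
# Agent 0 receives agent 1's embedding, and vice versa; agent 2 gets zeros.
```

In a learned model such as HyperComm, the hyperedge membership and the aggregation would be parameterized and trained end-to-end; the fixed sets and mean pooling here only show how hyperedges restrict who talks to whom.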