Abstract

Cooperation among agents with partial observations is an important problem in multi-agent reinforcement learning (MARL), where agents aim to maximize a common reward. Most existing cooperative MARL approaches focus on building different model frameworks, such as centralized, decentralized, and centralized training with decentralized execution. These methods feed agents' partial observations directly into the model but rarely consider the local relationships between agents. Local relationships can help agents integrate observations from nearby agents and thereby adopt a more effective cooperation policy. In this paper, we propose a MARL method based on spatial relationships called hierarchical relation graph soft actor-critic (HRG-SAC). The method first uses a hierarchical relation graph generation module to represent the spatial relationships between agents in local space. Second, it integrates the feature information of the relation graph with a graph convolutional network (GCN). Finally, soft actor-critic (SAC) is used to optimize the agents' actions during training. We conduct experiments on the Food Collector task and compare HRG-SAC with three baseline methods. The results demonstrate that the hierarchical relation graph can significantly improve MARL performance on cooperative tasks.
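
The abstract only sketches the pipeline, so the following is a minimal, hypothetical illustration of how a local relation graph might be built from agent positions and fused with a single GCN layer before being passed to the SAC policy and critic. It is not the paper's implementation: the names (`build_relation_graph`, `GCNLayer`), the distance-threshold rule for edges, and the tensor shapes are all assumptions made for the sketch.

```python
import torch
import torch.nn as nn

def build_relation_graph(positions, radius):
    # Hypothetical rule: connect agents whose pairwise distance is
    # within `radius`, giving a local-neighborhood adjacency matrix.
    dist = torch.cdist(positions, positions)   # (N, N) pairwise distances
    adj = (dist <= radius).float()
    adj.fill_diagonal_(1.0)                    # add self-loops
    return adj

class GCNLayer(nn.Module):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        deg = adj.sum(dim=-1)                          # node degrees
        d_inv_sqrt = deg.clamp(min=1.0).pow(-0.5)
        norm_adj = adj * d_inv_sqrt.unsqueeze(-1) * d_inv_sqrt.unsqueeze(-2)
        return torch.relu(self.linear(norm_adj @ x))   # aggregate neighbors

# Usage sketch: 4 agents with 2-D positions and 8-D partial observations.
positions = torch.rand(4, 2)
obs = torch.rand(4, 8)
adj = build_relation_graph(positions, radius=0.5)
encoder = GCNLayer(8, 16)
fused = encoder(obs, adj)   # relation-aware features for the SAC networks
print(fused.shape)          # torch.Size([4, 16])
```

Under these assumptions, the GCN output replaces each agent's raw observation as the input to the actor and critic, which is one way the integrated local information described in the abstract could reach the SAC update.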
