Abstract

While fixed-topology formation control with a centralized controller has been well studied for multi-agent systems, it remains challenging to develop robust distributed control policies that achieve a flexible formation without a global coordinate system. In this paper, we design a fully decentralized displacement-based formation control policy for multi-agent systems that can achieve any formation after one-time training. In particular, we use a model-free multi-agent reinforcement learning (MARL) approach to obtain such a policy through centralized training. The Hausdorff distance is adopted in the reward function to measure the distance between the current and target topologies. The feasibility of our method is verified both in simulation and in implementation on omni-directional vehicles.

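The abstract states that the Hausdorff distance measures how far the current formation is from the target topology inside the reward, but gives no implementation. The sketch below shows one way this term could be computed; the function names `hausdorff_distance` and `formation_reward`, as well as the centroid-centering step used to make the measure translation-invariant (in line with a displacement-based policy), are illustrative assumptions rather than the paper's actual reward.

```python
import numpy as np

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance between two point sets of shape (n, dim)."""
    # Pairwise Euclidean distances between every point in a and every point in b.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    # Directed distances: the farthest nearest-neighbor in each direction.
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def formation_reward(positions, target):
    """Hypothetical reward shaping: negative Hausdorff distance between the
    agents' current positions and the target formation, with both sets
    centered at their centroids so only the relative shape matters."""
    cur = positions - positions.mean(axis=0)
    tgt = target - target.mean(axis=0)
    return -hausdorff_distance(cur, tgt)
```

Under this sketch, the reward approaches zero as the agents' relative configuration converges to the target shape, regardless of where the formation sits in the workspace.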