Abstract

Developing efficient motion policies for multiple agents is challenging in decentralized dynamic settings, where each agent plans its own path without knowing the policies of the other agents. This paper presents an efficient learning-based motion planning method for multiagent systems. It adopts the multiagent deep deterministic policy gradient (MADDPG) framework to map partially observed information directly to motion commands for multiple agents. To improve the sample efficiency of MADDPG, so that the trained agents can adapt to more complex environments, a mixed experience (ME) strategy is introduced into MADDPG, yielding the proposed ME-MADDPG algorithm. The ME strategy comprises three mechanisms: (1) an artificial potential field (APF)-based sample generator that produces high-quality samples in the early training stage; (2) a dynamic mixed sampling strategy that blends training data from the different sources in a variable proportion; and (3) a delayed learning mechanism that stabilizes the training of the multiple agents. A series of experiments verifies the performance of the proposed ME-MADDPG algorithm: compared with MADDPG, it significantly improves convergence speed and convergence quality during training, and it shows better efficiency and adaptability in complex dynamic environments when applied to multiagent motion planning.
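To make the ME strategy more concrete, the sketch below illustrates two of its mechanisms in Python: sampling a minibatch from an APF-generated buffer and an agent-collected buffer with a proportion that shifts over training, and delaying actor updates relative to critic updates. The paper does not publish code, so the class and function names (`MixedReplay`, `maybe_update`), the decay schedule, and the delay interval are illustrative assumptions, not the authors' implementation.

```python
import random

# Illustrative sketch only: buffer interfaces, the decay schedule, and the
# delay interval are assumptions, not details taken from the paper.

class MixedReplay:
    """Samples a minibatch from two experience sources with a variable mix ratio."""

    def __init__(self, apf_buffer, rl_buffer, initial_ratio=0.8, decay=1e-4):
        self.apf_buffer = apf_buffer  # transitions produced by the APF controller
        self.rl_buffer = rl_buffer    # transitions collected by the learning agents
        self.ratio = initial_ratio    # fraction of APF samples, high early in training
        self.decay = decay            # how quickly the mix shifts toward RL experience

    def sample(self, batch_size):
        n_apf = int(round(self.ratio * batch_size))
        batch = random.sample(self.apf_buffer, min(n_apf, len(self.apf_buffer)))
        batch += random.sample(self.rl_buffer,
                               min(batch_size - len(batch), len(self.rl_buffer)))
        # Gradually rely less on the hand-crafted APF demonstrations.
        self.ratio = max(0.0, self.ratio - self.decay)
        return batch


def maybe_update(step, critic_update, actor_update, policy_delay=2):
    """Delayed learning: update critics every step, actors only every `policy_delay` steps."""
    critic_update()
    if step % policy_delay == 0:
        actor_update()
```

In this reading, the APF samples act as cheap demonstrations that bootstrap training, and the decaying ratio hands control over to the agents' own experience as their policies improve.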
