Abstract

In this work, we present an optimal cooperative control scheme for a multi-agent system in an unknown dynamic obstacle environment, based on an improved distributed cooperative reinforcement learning (RL) strategy with a three-layer collaborative mechanism. The three layers are the collaborative perception layer, the collaborative control layer, and the collaborative evaluation layer. Collaborative perception expands the perception range of each individual agent and improves the agents' ability to detect obstacles early. Neural networks (NNs) are employed to approximate the cost function and the optimal controller of each agent, with the NN weight matrices collaboratively optimized to achieve globally optimal performance. The distinguishing feature of the proposed control strategy is that cooperation among the agents is embodied not only in the NN inputs (through the collaborative perception layer) but also in the weight-updating procedure (through the collaborative evaluation and collaborative control layers). Comparative simulations demonstrate the effectiveness and performance of the proposed RL-based cooperative control scheme.
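To make the three-layer idea concrete, the following is a minimal sketch of how such a scheme could be structured, not the paper's actual algorithm. It assumes a ring communication topology, single-hidden-layer critic and actor networks per agent, and a consensus gain `KAPPA` that mixes neighbours' weights into each local update; all names, dimensions, and gains are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

N_AGENTS = 3          # number of cooperating agents (assumed)
OBS_DIM = 4           # per-agent observation: own state plus detected obstacle info (assumed)
ACT_DIM = 2           # per-agent control input dimension (assumed)
HIDDEN = 16           # width of the single-hidden-layer approximators (assumed)
ETA = 0.05            # local gradient step size (assumed)
KAPPA = 0.3           # consensus gain pulling weights toward neighbours (assumed)

# Assumed ring topology: each agent exchanges information with two neighbours.
NEIGHBOURS = {i: [(i - 1) % N_AGENTS, (i + 1) % N_AGENTS] for i in range(N_AGENTS)}

def init_weights(in_dim, out_dim):
    """One-hidden-layer NN weights, used here for both critic and actor."""
    return {"W1": 0.1 * rng.standard_normal((HIDDEN, in_dim)),
            "W2": 0.1 * rng.standard_normal((out_dim, HIDDEN))}

def forward(w, x):
    return w["W2"] @ np.tanh(w["W1"] @ x)

# Collaborative perception layer: each agent's NN input stacks its own
# observation with those shared by its neighbours, widening its effective view.
def fused_observation(obs, i):
    return np.concatenate([obs[i]] + [obs[j] for j in NEIGHBOURS[i]])

IN_DIM = OBS_DIM * 3   # own observation plus two neighbours' observations

critics = [init_weights(IN_DIM, 1) for _ in range(N_AGENTS)]       # approximate cost J_i
actors = [init_weights(IN_DIM, ACT_DIM) for _ in range(N_AGENTS)]  # approximate controller u_i

def consensus_step(weights, i, grad, key):
    """Collaborative evaluation/control layers (sketch): combine the local
    gradient update with a consensus term toward the neighbours' weights."""
    own = weights[i][key]
    nbr_mean = np.mean([weights[j][key] for j in NEIGHBOURS[i]], axis=0)
    weights[i][key] = own - ETA * grad + KAPPA * (nbr_mean - own)

# One illustrative update step on synthetic observations and a placeholder cost target.
obs = [rng.standard_normal(OBS_DIM) for _ in range(N_AGENTS)]
for i in range(N_AGENTS):
    x = fused_observation(obs, i)
    cost_estimate = forward(critics[i], x)
    control = forward(actors[i], x)
    td_error = cost_estimate - 1.0   # placeholder target; the real scheme derives this from the cost function
    # Output-layer gradients only, to keep the sketch short.
    grad_c = td_error * np.tanh(critics[i]["W1"] @ x)[None, :]
    consensus_step(critics, i, grad_c, "W2")
    grad_a = np.outer(control, np.tanh(actors[i]["W1"] @ x))
    consensus_step(actors, i, grad_a, "W2")
```

In this sketch, cooperation appears in the same two places the abstract highlights: the fused observation fed into each NN (perception layer) and the consensus term inside the weight update (evaluation and control layers).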
