Abstract

In this paper, the optimal tracking control of large-scale multi-agent systems (MAS) under constraints is investigated. Mean Field Game (MFG) theory is an emerging technique for overcoming the "curse of dimensionality" in large-scale multi-agent decision-making problems. Specifically, MFG theory computes the optimal strategy from a single fixed-dimension probability density function (PDF) rather than from the high-dimensional state information collected from all individual agents. However, MFG theory carries a stringent limitation: it assumes that all agents operate in a predefined, unconstrained space, an assumption that is often unrealistic in complex practical environments. In this paper, the original MFG theory is extended to account for two practical state constraints imposed by the environment, i.e., boundary and density constraints. Moreover, to solve the extended MFG-type control problem online, the actor-critic reinforcement learning mechanism is utilized and further extended into a novel actor-critic-mass (ACM) algorithm. Finally, a series of numerical simulations is conducted to demonstrate the effectiveness of the developed schemes.
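For context, the unconstrained MFG formulation that the abstract refers to is conventionally written as a coupled pair of partial differential equations: a Hamilton-Jacobi-Bellman (HJB) equation solved backward in time for a representative agent's value function, and a Fokker-Planck-Kolmogorov (FPK) equation propagated forward in time for the population density. The notation below is generic and illustrative, not taken from the paper:

```latex
% Standard (unconstrained) MFG system in generic notation.
% V(x,t): representative agent's value function;  m(x,t): population density (PDF);
% f: dynamics;  L: running cost (coupled to m);  \sigma: diffusion coefficient.

% HJB equation (backward in time):
-\partial_t V(x,t) \;=\; \min_{u}\Big\{ L(x,u,m) \;+\; \nabla V(x,t)\cdot f(x,u) \Big\}
  \;+\; \tfrac{\sigma^{2}}{2}\,\Delta V(x,t)

% FPK equation (forward in time), driven by the optimal control u^{*}:
\partial_t m(x,t) \;=\; -\,\nabla\!\cdot\!\big( m(x,t)\, f(x,u^{*}) \big)
  \;+\; \tfrac{\sigma^{2}}{2}\,\Delta m(x,t)
```

The paper's stated contribution is to augment this system with boundary and density constraints on the state space and to approximate its solution online via the actor-critic-mass learning scheme.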
