Abstract

This paper investigates optimal coordination tracking control for nonlinear multi-agent systems (NMASs) with unknown internal states by using an adaptive dynamic programming (ADP) method. The optimal coordination control of MASs depends on the solutions to the coupled Hamilton–Jacobi–Bellman (HJB) equations, which are almost impossible to solve analytically. Worse still, accurate system models are often unavailable or difficult to obtain in practical applications. To overcome these difficulties, a neural network (NN) based observer is designed for each agent to reconstruct its internal states from measurable input–output data rather than from an accurate system model. Based on the observed states and the Bellman optimality principle, optimal coordination control policies are derived from the coupled HJB equations. To implement the proposed ADP method, a critic network framework is constructed for each agent to approximate its value function and to compute the optimal coordination control policy. We then prove that the local coordination tracking errors and the weight estimation errors are uniformly ultimately bounded (UUB), and that the approximated control policies converge to their target values. Finally, two simulation examples are given to demonstrate the effectiveness of the proposed ADP method.
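For orientation only, since the abstract does not reproduce the equations: in ADP-based coordination of multi-agent systems over a communication graph, the coupled HJB equations referred to above typically take the form sketched below. The symbols ($\delta_i$, $V_i$, $u_i$, $Q_i$, $R_{ij}$, $\mathcal{N}_i$, $\phi_i$, $\hat{W}_i$) are generic placeholders drawn from the standard graphical-game formulation, not the paper's own notation.

% Illustrative (assumed) local value function and coupled HJB equation for agent i;
% the notation is generic and not taken from the paper.
\begin{align}
  V_i\bigl(\delta_i(t)\bigr) &= \int_t^{\infty}
      \Bigl( \delta_i^{\top} Q_i \delta_i
           + u_i^{\top} R_{ii} u_i
           + \sum_{j \in \mathcal{N}_i} u_j^{\top} R_{ij} u_j \Bigr)\, d\tau, \\
  0 &= \min_{u_i}\Bigl( \delta_i^{\top} Q_i \delta_i
           + u_i^{\top} R_{ii} u_i
           + \sum_{j \in \mathcal{N}_i} u_j^{\top} R_{ij} u_j
           + (\nabla V_i)^{\top} \dot{\delta}_i \Bigr),
\end{align}

where $\delta_i$ is the local coordination tracking error of agent $i$ and $\mathcal{N}_i$ is its neighbor set. The minimizing $u_i^{*}$ is agent $i$'s optimal coordination control policy, and each equation is coupled to the others through the neighbors' policies $u_j$. In a critic-network implementation of the kind described above, $V_i$ is approximated as $\hat{V}_i(\delta_i) = \hat{W}_i^{\top} \phi_i(\delta_i)$ with a chosen basis $\phi_i$ and tuned weights $\hat{W}_i$.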

