Abstract

A novel adaptive dynamic programming (ADP) algorithm is developed to solve the optimal tracking control problem for discrete-time multi-agent systems. In contrast to the classical policy iteration ADP algorithm, which consists of two components, policy evaluation and policy improvement, a two-stage policy iteration algorithm is proposed to obtain the iterative control laws and the iterative performance index functions. The proposed algorithm contains a sub-iteration procedure that computes the iterative performance index functions within the policy evaluation stage. Convergence proofs for the iterative performance index functions and the iterative control laws are provided, and the stability of the closed-loop error system is also established. Furthermore, an actor-critic neural network (NN) is used to approximate both the iterative control laws and the iterative performance index functions, which allows the developed algorithm to be implemented online without knowledge of the system dynamics. Finally, simulation results illustrate the performance of the proposed method.
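To make the two-stage structure concrete, the following is a minimal sketch of classical policy iteration on an illustrative discrete-time linear-quadratic problem, with a sub-iterative (fixed-point) policy-evaluation step loosely mirroring the structure described above. The system matrices `A`, `B`, the weights `Q`, `R`, and the initial stabilizing gain are assumptions for illustration only; they are not taken from the paper, and the multi-agent tracking and NN-approximation aspects are omitted.

```python
import numpy as np

# Illustrative plant x_{k+1} = A x_k + B u_k and quadratic cost weights
# (hypothetical values, not from the paper).
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])

K = np.array([[1.0, 1.0]])  # assumed initial stabilizing feedback gain

for _ in range(50):  # outer iteration: evaluation then improvement
    Ac = A - B @ K
    # Policy evaluation by sub-iteration: fixed point of the recursion
    #   P <- Q + K' R K + Ac' P Ac
    # which converges because the closed loop Ac is stable.
    P = np.zeros((2, 2))
    for _ in range(5000):
        P_next = Q + K.T @ R @ K + Ac.T @ P @ Ac
        if np.max(np.abs(P_next - P)) < 1e-12:
            P = P_next
            break
        P = P_next
    # Policy improvement: greedy gain with respect to the evaluated cost P.
    K_new = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    if np.max(np.abs(K_new - K)) < 1e-10:
        K = K_new
        break
    K = K_new
```

At convergence, `P` satisfies the discrete-time algebraic Riccati equation and `K` is the corresponding optimal gain for this toy regulation problem; the paper's algorithm applies an analogous evaluation/improvement alternation to the multi-agent tracking error dynamics.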
