Abstract

Conventionally, when the system dynamics are known, the optimal consensus control problem relies on solving the coupled Hamilton–Jacobi–Bellman (HJB) equations. In this paper, considering unknown system dynamics, a local $Q$-function-based adaptive dynamic programming method is proposed to address the optimal consensus control problem for unknown discrete-time nonlinear multiagent systems by approximating the solutions of the coupled HJB equations. First, a local $Q$-function is defined that accounts for the local consensus error and the actions of the agent and its neighbors. With this $Q$-function, the derivatives with respect to the weights of the consensus control policies can be obtained conveniently, even without a model of the system dynamics. Then, based on the defined local $Q$-function, a distributed policy iteration algorithm is developed and theoretically proved to converge to the solutions of the coupled HJB equations. An actor–critic neural network framework is constructed to implement the developed model-free optimal consensus control method by approximating the local $Q$-functions and the control policies. Finally, the feasibility and effectiveness of the developed method are verified by a series of simulations.
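The key advantage the abstract attributes to the $Q$-function is that the policy can be improved directly from $Q$-values, with no system model in the improvement step. As a minimal sketch of that idea, the toy single-agent example below (a hypothetical 3-state, 2-action deterministic MDP, far simpler than the paper's multiagent setting and coupled HJB equations) runs policy iteration where both evaluation and greedy improvement operate on $Q(s,a)$ alone:

```python
import numpy as np

# Hypothetical toy MDP (illustration only, not from the paper):
# P[s, a] gives the deterministic next state, R[s, a] the reward.
P = np.array([[1, 2], [0, 2], [2, 2]])
R = np.array([[0.0, 1.0], [0.0, 2.0], [0.0, 0.0]])
gamma = 0.9  # discount factor

def evaluate_q(policy, iters=500):
    """Compute Q^pi by repeated Bellman backups on the Q-function."""
    Q = np.zeros_like(R)
    for _ in range(iters):
        for s in range(3):
            for a in range(2):
                s_next = P[s, a]
                Q[s, a] = R[s, a] + gamma * Q[s_next, policy[s_next]]
    return Q

def policy_iteration():
    """Alternate Q-evaluation and greedy improvement until the policy is stable."""
    policy = np.zeros(3, dtype=int)  # arbitrary initial policy
    while True:
        Q = evaluate_q(policy)
        # Improvement uses only Q(s, a): no transition model is consulted here,
        # which is the point of working with Q-functions instead of V-functions.
        new_policy = Q.argmax(axis=1)
        if np.array_equal(new_policy, policy):
            return policy, Q
        policy = new_policy

policy, Q = policy_iteration()
```

In this toy problem the greedy policy settles on the delayed-reward action in state 0 (reaching state 1 first, then collecting the larger reward 2) because the $Q$-values encode the discounted return; the paper's method applies the same evaluate-improve cycle distributively, with neural networks approximating each agent's local $Q$-function.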
