Abstract

This article is concerned with the optimal synchronization problem for discrete-time nonlinear heterogeneous multiagent systems (MASs) with an active leader. To overcome the difficulty of deriving optimal control protocols for these systems, we develop an observer-based adaptive synchronization control approach, comprising a distributed observer and a distributed model reference adaptive controller, neither of which requires prior knowledge of the agents' dynamics. First, an adaptive neural network distributed observer is designed so that each follower can estimate the state of the nonlinear active leader. This observer serves as a reference model for distributed model reference adaptive control (MRAC). Then, a reinforcement learning-based distributed MRAC algorithm is presented to make every follower track the behavior of its corresponding reference model in real time. In this algorithm, a distributed actor-critic network is employed to approximate the optimal distributed control protocols and the cost function. Through convergence analysis, the overall observer estimation error, the model reference tracking error, and the weight estimation errors are proved to be uniformly ultimately bounded. Synthesizing these results, the developed approach achieves synchronization. Its effectiveness is verified through a numerical example.
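To make the observer idea concrete, the sketch below simulates a consensus-style distributed observer in which each follower fuses its neighbors' estimates, and pinned agents also receive the leader's state directly. All specifics here are illustrative assumptions: the leader is taken to be linear (the paper's leader is nonlinear and handled with an adaptive neural network), and the communication graph, pinning set, and observer gain `mu` are invented for the example.

```python
import numpy as np

# Assumed setup: 3 followers estimating a 2-D leader state over a directed
# cycle graph. The linear leader used here stands in for the paper's
# nonlinear leader purely to illustrate the consensus observer update.
A = np.array([[0.99, 0.1], [-0.1, 0.99]])   # leader system matrix (assumed)
adj = np.array([[0, 1, 0],                  # adj[i, j] = 1: agent i hears j
                [0, 0, 1],
                [1, 0, 0]])
pin = np.array([1, 0, 0])                   # only agent 0 is pinned to leader
mu = 0.5                                    # observer gain (assumed)

x = np.array([1.0, 0.0])                    # true leader state
xh = np.zeros((3, 2))                       # followers' estimates

for _ in range(200):
    new = np.empty_like(xh)
    for i in range(3):
        # Consensus innovation: disagreement with neighbors, plus a
        # pinning term toward the true leader state for pinned agents.
        e = sum(adj[i, j] * (xh[j] - xh[i]) for j in range(3))
        e = e + pin[i] * (x - xh[i])
        # Propagate the corrected estimate through the leader dynamics.
        new[i] = (xh[i] + mu * e) @ A.T
    xh = new
    x = A @ x                               # leader evolves in open loop

err = np.linalg.norm(xh - x, axis=1)        # per-agent estimation error
```

With this gain and graph, every follower's estimation error decays toward zero even though only agent 0 observes the leader directly, which is the role the distributed observer plays as a reference model for the MRAC layer.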
