Abstract

In this article, we study the consensus problem of multiagent systems (MASs) without any knowledge of the system model, using the reinforcement learning (RL) method and an event-based control strategy. First, we design an adaptive event-based consensus control protocol that uses only local sampled state information, so that the consensus errors of all agents are uniformly ultimately bounded. The validity of this event-triggered adaptive control protocol is confirmed by excluding Zeno behavior in finite time. Then, based on the RL approach, we present a model-free algorithm to obtain the feedback gain matrix, thereby constructing the adaptive event-triggered control strategy without any knowledge of the model. In contrast to existing related works, this RL-based event-triggered adaptive control algorithm relies only on local sampled state information, independent of any model information or global network information. Finally, we provide examples to demonstrate the validity of the proposed adaptive event-based consensus algorithm.
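To make the setting concrete, the sketch below simulates a hypothetical event-triggered consensus loop in Python. Every specific here is an illustrative assumption rather than the paper's actual protocol: the agents follow scalar single-integrator dynamics on a ring graph, the scalar `k_gain` stands in for the feedback gain matrix that the paper learns via RL, and the triggering rule (measurement error compared against a threshold tied to local disagreement, plus a small constant offset that rules out Zeno behavior) is one common form of such a condition.

```python
import numpy as np

# Hypothetical setup: 4 agents with scalar single-integrator dynamics
# x_i' = u_i on a ring graph. All gains and thresholds are illustrative
# assumptions, not the protocol proposed in the paper.
N = 4
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)   # adjacency matrix (ring)
L = np.diag(A.sum(axis=1)) - A               # graph Laplacian (unused below,
                                             # shown for reference)

dt, T = 0.01, 10.0
k_gain = 1.0   # assumed feedback gain (the paper obtains this via model-free RL)
sigma = 0.05   # assumed triggering threshold parameter

x = np.array([1.0, -2.0, 0.5, 3.0])          # true agent states
x_hat = x.copy()                             # last sampled (broadcast) states
events = [0] * N                             # event counts per agent

for step in range(int(T / dt)):
    for i in range(N):
        # Event-triggered rule: agent i re-samples its state when the
        # measurement error exceeds a threshold tied to its local
        # disagreement; the constant offset excludes Zeno behavior.
        e_i = x_hat[i] - x[i]
        disagreement = sum(A[i, j] * (x_hat[j] - x_hat[i]) for j in range(N))
        if abs(e_i) > sigma * abs(disagreement) + 1e-4:
            x_hat[i] = x[i]
            events[i] += 1
    # The control law uses only sampled neighbor states, matching the
    # abstract's claim of purely local sampled information.
    u = np.array([k_gain * sum(A[i, j] * (x_hat[j] - x_hat[i])
                               for j in range(N)) for i in range(N)])
    x = x + dt * u   # forward-Euler integration of the agent dynamics

print("final states:", np.round(x, 3))   # states cluster near consensus
print("events per agent:", events)
```

Note how each agent's update touches only the sampled states `x_hat` of its graph neighbors: no system matrices or global network quantities appear, which is the property the abstract attributes to the proposed algorithm.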
