In this paper, a novel reinforcement learning (RL)-based adaptive event-triggered control problem is studied for non-affine multi-agent systems (MASs) with time-varying dead zones. The purpose is to design an efficient event-triggered mechanism that achieves optimal control of the MASs. Compared with existing results, an improved smooth event-triggered mechanism is proposed, which not only overcomes the design difficulties caused by discontinuous triggering signals but also reduces the waste of communication resources. To achieve optimal event-triggered control, an RL algorithm with an identifier-critic-actor structure based on fuzzy logic systems (FLSs) is employed, in which the identifier estimates the system dynamics, the critic evaluates the control performance, and the actor executes the control action. In addition, the time-varying dead-zone input considered in the non-affine MASs complicates the controller design but makes the results applicable to a broader class of systems. Using Lyapunov theory, it is proved that optimal control performance is achieved and that the tracking error converges to a small neighborhood of the origin. Finally, a simulation example verifies the feasibility of the proposed method.
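To illustrate the general idea of event-triggered control described above, the following is a minimal sketch of a standard relative-threshold triggering rule, not the paper's improved smooth mechanism or its RL-based design; the threshold parameters, the proportional stand-in for the actor, and the first-order plant are all assumptions made only for this example.

```python
import numpy as np

# Generic illustration (not the paper's specific design): the control value sent
# to the plant is refreshed only when a triggering condition fires, which is the
# basic idea behind saving communication resources.
# ALPHA, DELTA, and the simple plant model are hypothetical choices.

ALPHA = 0.3    # relative threshold on the control deviation (assumed value)
DELTA = 0.05   # absolute offset, helps rule out Zeno behavior (assumed value)
DT = 0.01      # integration step

def controller(x, x_ref):
    """Continuously computed control signal; a simple proportional law
    stands in here for the learned actor output."""
    return -2.0 * (x - x_ref)

x, x_ref = 1.0, 0.0
u_last = controller(x, x_ref)   # last transmitted control value
events = 0

for k in range(1000):
    u_desired = controller(x, x_ref)
    # Triggering condition: transmit a new control value only when the
    # deviation from the last transmitted value exceeds the threshold.
    if abs(u_desired - u_last) > ALPHA * abs(u_desired) + DELTA:
        u_last = u_desired
        events += 1
    # Simple first-order plant x_dot = -x + u, used only to drive the example.
    x += DT * (-x + u_last)

print(f"events: {events} of 1000 steps, final tracking error: {abs(x - x_ref):.4f}")
```

In such schemes the number of triggering events is typically far smaller than the number of simulation steps, which is the communication saving the abstract refers to; the paper's contribution is a smooth variant of this type of mechanism combined with the identifier-critic-actor RL design.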