Abstract
In this paper, a novel reinforcement learning (RL)-based adaptive event-triggered control problem is studied for non-affine multi-agent systems (MASs) with time-varying dead-zone. The goal is to design an efficient event-triggered mechanism that achieves optimal control of MASs. Compared with existing results, an improved smooth event-triggered mechanism is proposed, which not only overcomes the design difficulties caused by discontinuous trigger signals but also reduces the waste of communication resources. To achieve optimal event-triggered control, an RL algorithm with an identifier-critic-actor structure based on fuzzy logic systems (FLSs) is applied to estimate the system dynamics, evaluate the control performance, and execute the control action, respectively. In addition, the time-varying dead-zone in non-affine MASs is taken into account; although it complicates the controller design, it makes the results applicable to a broader class of systems. Through Lyapunov theory, it is proved that optimal control performance can be achieved and that the tracking error converges to a small neighborhood of the origin. Finally, simulation results demonstrate the feasibility of the proposed method.
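To make the ingredients of the abstract concrete, the following is a minimal sketch of a typical FLS-based identifier-critic-actor approximation together with a representative smooth, relative-threshold event-triggering rule. The symbols, weight notations, and threshold parameters here are illustrative assumptions for exposition only, not the exact formulation developed in this paper.

% Illustrative (assumed) forms only; the paper's exact design may differ.
% Identifier, critic, and actor for agent i are each approximated by an FLS
% of the form \hat{W}^{\top}\varphi(Z), with adjustable weight vector
% \hat{W} and fuzzy basis functions \varphi(Z):
\begin{align}
  \hat{f}_i(Z_i) &= \hat{W}_{f,i}^{\top}\varphi_{f,i}(Z_i), \\ % identifier: unknown dynamics
  \hat{J}_i(Z_i) &= \hat{W}_{c,i}^{\top}\varphi_{c,i}(Z_i), \\ % critic: performance evaluation
  u_i(Z_i)       &= \hat{W}_{a,i}^{\top}\varphi_{a,i}(Z_i).    % actor: control action
\end{align}
% A representative event-triggering rule with a continuous trigger signal:
% the transmitted control is refreshed only when the deviation between the
% continuously designed signal \omega_i(t) and its last transmitted value
% \omega_i(t_k) exceeds a relative threshold, which saves communication
% resources while avoiding discontinuous trigger signals.
\begin{equation}
  t_{k+1} = \inf\bigl\{\, t > t_k : \lvert \omega_i(t) - \omega_i(t_k) \rvert
            \ge \delta_i \lvert \omega_i(t_k) \rvert + \varepsilon_i \,\bigr\},
\end{equation}
% where \delta_i \in (0,1) and \varepsilon_i > 0 are design parameters
% (assumed names, chosen here purely for illustration).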