Abstract

In this paper, we present a novel approach to the event-triggered optimized consensus tracking control problem for a class of uncertain nonlinear multi-agent systems (MASs). To optimize control performance, we employ an adaptive reinforcement learning (RL) algorithm based on the actor-critic architecture together with the backstepping method. The proposed RL-based optimized controller uses a novel event-triggered strategy that dynamically adjusts the sampling-error threshold online, reducing communication resource usage and computational complexity through intermittent transmission of state signals. We establish the boundedness of all signals in the closed-loop MAS through a Lyapunov-based stability analysis and show that Zeno behavior is excluded. Numerical simulations of a practical multi-electromechanical system validate the effectiveness of the proposed scheme.
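To make the sampling idea concrete, the sketch below shows a generic relative-threshold event-trigger of the kind used in event-triggered control. It is only an illustration: the triggering rule, the parameters `delta` and `mu`, and the stand-in dynamics are hypothetical and do not reproduce the paper's specific scheme.

```python
import numpy as np

def should_transmit(x, x_last, delta=0.1, mu=0.01):
    """Transmit the state only when the sampling error ||x - x_last||
    exceeds a threshold scaling with ||x||; the small offset mu keeps
    the threshold bounded away from zero, which helps exclude Zeno
    behavior (no accumulation of triggering instants)."""
    e = np.linalg.norm(x - x_last)
    return e >= delta * np.linalg.norm(x) + mu

# Simulate a decaying trajectory and count how often the state is sent.
x_last = np.array([1.0, -1.0])  # last transmitted state
x = x_last.copy()
transmissions = 0
for k in range(200):
    x = 0.98 * x  # stand-in for the agent's closed-loop dynamics
    if should_transmit(x, x_last):
        x_last = x.copy()  # controller receives the fresh sample
        transmissions += 1

print(transmissions)  # far fewer than the 200 time steps
```

Between events the controller holds the last transmitted state, so communication happens only when the held value has drifted too far from the true state, which is the mechanism behind the claimed savings in communication and computation.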
