Abstract

In this article, an integral reinforcement learning (IRL)-based dynamic event-triggered safety control scheme is proposed to address the multiplayer Stackelberg–Nash game (MSNG) problem for continuous-time nonlinear systems with time-varying state constraints. First, a new barrier function (BF) is introduced that combines traditional barrier functions with a novel smooth function to handle the time-varying state constraints. The constrained system is then transformed into an unconstrained one via a state transformation technique, which allows the hierarchical decision problem to be characterized as an MSNG between the leader and the followers. Meanwhile, the IRL technique is applied to relax the requirement for precise knowledge of the system dynamics. Moreover, a new dynamic event-triggered control (DETC) mechanism is designed, yielding coupled dynamic event-triggered Hamilton–Jacobi (HJ) equations. A single critic neural network (NN) is constructed to learn the optimal control laws of the leader and the followers. Using Lyapunov theory, the proposed IRL-based dynamic event-triggered safety control method is shown to guarantee that the system state and the critic NN weight estimation errors are uniformly ultimately bounded (UUB). Finally, two simulation examples demonstrate the effectiveness of the proposed algorithm.
