Abstract

In this article, the dynamic event-triggered control problem based on stochastic adaptive dynamic programming (ADP) is investigated for nonlinear systems over a communication network. First, a novel condition for establishing discrete-time stochastic input-to-state stability (SISS) is derived. Then, an event-triggered control strategy is devised, and a near-optimal control policy is designed using identifier-actor-critic neural networks (NNs) with an event-sampled state vector. Moreover, an adaptive static event-sampling condition is designed via the Lyapunov technique to guarantee ultimate boundedness (UB) of the closed-loop system. However, since the static event-triggered rule depends only on the current state, regardless of previous values, this article presents an explicit dynamic event-triggered rule. Furthermore, we prove that the lower bound on the inter-sample interval of the proposed dynamic event-triggered control strategy is greater than one, which avoids the so-called triviality phenomenon. Finally, the effectiveness of the proposed near-optimal control scheme is verified by a simulation example.
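To illustrate the static versus dynamic triggering distinction described above, the following is a minimal sketch of a Girard-type discrete-time dynamic event-triggering rule. It is an illustrative assumption only: the internal variable eta, the parameters sigma, lam, and theta, and the quadratic threshold form are hypothetical choices for exposition and do not reproduce the paper's actual rule.

```python
import numpy as np

def static_trigger(x, e, sigma=0.5):
    # Static rule: transmit when the measurement (gap) error exceeds a
    # state-dependent threshold; it depends only on the current values.
    return np.linalg.norm(e) ** 2 > sigma * np.linalg.norm(x) ** 2

class DynamicTrigger:
    """Girard-type dynamic rule (illustrative): an internal variable eta
    accumulates the slack sigma*||x||^2 - ||e||^2, so past behavior can
    delay triggering relative to the static rule."""

    def __init__(self, sigma=0.5, lam=0.8, theta=1.0, eta0=1.0):
        # All parameter values here are assumptions for illustration.
        self.sigma, self.lam, self.theta = sigma, lam, theta
        self.eta = eta0

    def step(self, x, e):
        slack = self.sigma * np.linalg.norm(x) ** 2 - np.linalg.norm(e) ** 2
        fire = self.eta + self.theta * slack < 0.0
        # Discrete-time update of the internal dynamic variable,
        # clamped at zero so eta stays nonnegative.
        self.eta = max(self.lam * self.eta + slack, 0.0)
        return fire

# Usage: count events along a simple decaying scalar trajectory.
trig = DynamicTrigger()
x, x_hat, n_events = 1.0, 1.0, 0   # x_hat holds the last transmitted state
for k in range(50):
    e = x_hat - x                   # gap error since the last transmission
    if trig.step(np.array([x]), np.array([e])):
        x_hat = x                   # transmit: the gap error resets to zero
        n_events += 1
    x *= 0.95                       # placeholder stable dynamics
print("events:", n_events)
```

Because eta carries information from previous steps, the dynamic rule typically fires less often than the static one under the same sigma, which is the practical motivation the abstract gives for moving beyond a purely state-dependent condition.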
