Abstract

In this paper, a periodic event-triggered control scheme is developed for nonlinear discrete-time systems under an actor–critic reinforcement learning (RL) architecture. The periodic event-triggered mechanism (ETM) decides whether the sampled data are delivered to the controller, and the controller is updated only when the event-triggering error exceeds a prescribed threshold. Compared with traditional continuous ETMs, the proposed periodic ETM guarantees a positive lower bound on the inter-event intervals and avoids point-by-point evaluation of the triggering condition, so that part of the communication and computation resources is saved. The critic and actor neural networks (NNs), both constructed from radial basis function neural networks (RBFNNs), approximate the unknown long-term performance index function and the ideal event-triggered controller, respectively. A rigorous stability analysis based on the Lyapunov difference method shows that the closed-loop system is stabilized and that all error signals of the closed-loop system are uniformly ultimately bounded (UUB) under the proposed control scheme. Finally, two simulation examples validate the effectiveness of the control design.
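To make the mechanism concrete, the following is a minimal Python sketch of a periodic ETM combined with an RBFNN actor–critic loop. It is an illustration under stated assumptions, not the paper's design: the plant dynamics, RBF centers, learning rates, discount factor, sampling period, threshold form, and update laws are all placeholders chosen for readability.

```python
import numpy as np

# --- Illustrative sketch only: the plant, gains, threshold form, and update
# --- laws below are assumptions for demonstration, not the paper's scheme.

def rbf_features(x, centers, width=1.0):
    """Gaussian RBF feature vector phi(x) for an RBFNN approximator."""
    d = np.linalg.norm(centers - x, axis=1)
    return np.exp(-(d / width) ** 2)

def plant(x, u):
    """Assumed scalar nonlinear discrete-time plant: x_{k+1} = 0.8 sin(x_k) + u_k."""
    return 0.8 * np.sin(x) + u

rng = np.random.default_rng(0)
centers = rng.uniform(-2.0, 2.0, size=(10, 1))   # RBF centers (assumed)
Wc = np.zeros(10)                                # critic NN weights
Wa = np.zeros(10)                                # actor NN weights
alpha_c, alpha_a = 0.05, 0.02                    # learning rates (assumed)
gamma = 0.95                                     # discount factor (assumed)
h = 5                                            # sampling period of the periodic ETM
sigma = 0.1                                      # triggering threshold (assumed)

x = np.array([1.5])                              # initial state
x_event = x.copy()                               # last transmitted state
u = 0.0

for k in range(200):
    # Periodic ETM: the triggering condition is checked only every h steps,
    # which enforces a minimum inter-event interval of h and avoids
    # point-by-point evaluation of the condition.
    if k % h == 0:
        e = np.linalg.norm(x - x_event)          # gap since last transmission
        if e > sigma * np.linalg.norm(x):        # relative threshold (assumed form)
            x_event = x.copy()                   # deliver sample, update controller
            u = float(Wa @ rbf_features(x_event, centers))

    x_next = plant(x, u)

    # Critic: temporal-difference update toward the long-term performance
    # index, with a quadratic stage cost assumed for illustration.
    cost = float(x @ x + 0.1 * u * u)
    phi, phi_next = rbf_features(x, centers), rbf_features(x_next, centers)
    td = cost + gamma * Wc @ phi_next - Wc @ phi
    Wc += alpha_c * td * phi

    # Actor: simplified gradient-style correction of the event-triggered
    # controller using the TD signal (the paper's actual update law differs).
    Wa -= alpha_a * td * rbf_features(x_event, centers)

    x = x_next

print(f"final |x| = {abs(x[0]):.4f}")
```

The key design point the sketch reflects is that the triggering test lives inside the `k % h == 0` branch: between two checks the controller holds its last value, so no inter-event interval can be shorter than the sampling period h.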
