Abstract

In this paper, a novel reinforcement learning (RL)-based event-triggered (ET) output feedback control algorithm is proposed for a class of uncertain strict-feedback nonlinear discrete-time systems. In contrast to traditional RL-based control methods, we propose an ET output feedback controller based on the backstepping technique, which efficiently conserves transmission resources. Then, using radial basis function (RBF) neural networks (NNs), critic NNs are constructed to approximate the critic function at each step. Furthermore, supported by the proposed ET mechanism, a sampled output feedback controller is designed to guarantee that the tracking errors and all signals of the closed-loop system are semi-globally uniformly ultimately bounded (SGUUB). Finally, a simulation example demonstrates the effectiveness of the control strategy.
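The abstract's two core ingredients, RBF NN approximation of a critic function and an event-triggered transmission rule, can be illustrated with a minimal sketch. This is a generic illustration, not the paper's specific design: the Gaussian widths, centers, and the relative-threshold trigger condition below are assumptions chosen for simplicity.

```python
import numpy as np

def rbf_features(x, centers, width):
    """Gaussian radial basis functions phi_i(x) = exp(-||x - c_i||^2 / width^2)."""
    d2 = np.sum((centers - x) ** 2, axis=1)
    return np.exp(-d2 / width**2)

def rbf_approx(x, weights, centers, width):
    """Standard NN approximation W^T phi(x), the form typically used
    for critic functions in RL-based backstepping designs."""
    return weights @ rbf_features(x, centers, width)

def event_triggered(u_last, u_current, threshold):
    """Illustrative absolute-threshold trigger (an assumed, simple rule):
    transmit a new control value only when it deviates from the last
    transmitted value by more than the threshold, saving transmissions."""
    return abs(u_current - u_last) > threshold
```

In an event-triggered loop, the controller would call `event_triggered` at each sampling instant and hold the previously transmitted control input whenever the condition is false, which is how transmission resources are conserved.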
