Abstract

This article presents a novel event-driven guaranteed cost control method for nonlinear systems subject to actuator faults. To handle actuator faults and obtain an event-driven approximate optimal guaranteed cost controller for general nonlinear dynamics, a reinforcement learning (RL) algorithm is used to develop a sliding-mode control (SMC) strategy. To begin with, the unknown faults are estimated by designing a fault observer. Meanwhile, an SMC technique is presented to counter the effect of abrupt faults. In addition, the optimal performance of the equivalent sliding-mode dynamics is considered, and an event-driven guaranteed cost control mechanism is implemented using the RL principle. In the control process, a general cost function with a simpler structure is given to reduce computational complexity. At the same time, a modified cost function is approximated by a single critic neural network (NN) to obtain the optimal guaranteed cost control. A modified weight update law for the critic NN is also presented to relax the persistence of excitation (PE) condition. Moreover, a new triggering condition, which is easy to implement, is designed, and the critic NN update law ensures that the system states remain stable. Furthermore, by means of Lyapunov analysis, it is demonstrated that the developed event-driven control method guarantees the uniformly ultimately bounded (UUB) property of all signals. Finally, three simulation results are given to validate the designed control method.
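To make the event-driven idea concrete, the following is a minimal, self-contained sketch of an event-triggered control loop with a critic-NN weight update at each triggering instant. Everything here is assumed for illustration: the plant, feedback gain, critic basis, learning rate, and the state-gap triggering rule are hypothetical stand-ins, not the paper's actual fault observer, SMC law, cost function, or triggering condition.

```python
import numpy as np

def critic_features(x):
    # Quadratic basis for the critic NN (assumed for illustration).
    return np.array([x[0] ** 2, x[0] * x[1], x[1] ** 2])

def control(x_event):
    # Hypothetical state-feedback law, held constant between events.
    K = np.array([1.0, 2.0])
    return -float(K @ x_event)

def simulate(steps=500, dt=0.01):
    x = np.array([1.0, -0.5])   # current state
    W = np.zeros(3)             # critic NN weights
    x_event = x.copy()          # state sampled at the last triggering instant
    u = control(x_event)
    n_events = 0
    for _ in range(steps):
        # Event-triggering rule (a common generic form): refresh the control
        # only when the gap between the current state and the last sampled
        # state exceeds a state-dependent threshold.
        gap = np.linalg.norm(x - x_event)
        if gap > 0.05 * np.linalg.norm(x) + 1e-3:
            x_event = x.copy()
            u = control(x_event)
            # Normalized gradient-descent critic update at the event instant,
            # driven by a Bellman-like residual (sketch only).
            phi = critic_features(x_event)
            bellman_err = float(W @ phi) - (float(x_event @ x_event) + u * u)
            W -= 0.1 * bellman_err * phi / (1.0 + float(phi @ phi)) ** 2
            n_events += 1
        # A simple stable linear plant stands in for the nonlinear faulty
        # dynamics; Euler integration with zero-order-hold control.
        A = np.array([[0.0, 1.0], [-1.0, -2.0]])
        b = np.array([0.0, 1.0])
        x = x + dt * (A @ x + b * u)
    return x, n_events

x_final, n_events = simulate()
```

Note how the controller updates, and the critic learning steps, occur only at event instants rather than every sampling step, which is what reduces the computation and communication load in event-driven designs.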
