Abstract

This paper addresses the decentralized fault-tolerant control problem for interconnected nonlinear systems under a reinforcement learning strategy. The system under consideration includes unknown actuator faults and asymmetric input constraints. By constructing an improved cost function related to the estimation of the actuator faults for each auxiliary subsystem, the original control problem is converted into finding an array of decentralized optimal control policies. We then prove that these optimal control policies ensure that the entire system is stable in the sense of uniform ultimate boundedness. Moreover, a single-critic network architecture is developed to obtain the solutions of the Hamilton–Jacobi–Bellman equations, which simplifies the structure of the reinforcement learning algorithm. All signals in the closed-loop auxiliary subsystems are shown to be uniformly ultimately bounded based on Lyapunov theory, and numerical and practical simulation examples are conducted to validate the effectiveness of the designed method.
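
To make the single-critic idea concrete, the following is a minimal illustrative sketch, not the paper's construction: a lone critic network approximates the value function and its weights are tuned to drive the Hamilton–Jacobi–Bellman residual toward zero. The toy dynamics, polynomial basis, cost weights, and learning rate below are assumptions chosen purely for illustration; fault estimation and asymmetric input constraints are omitted.

```python
# Hedged sketch of a single-critic HJB-residual update on an assumed toy
# scalar system x_dot = f(x) + g(x) u with stage cost Q*x^2 + R*u^2.
# All numerical choices here are illustrative assumptions, not the paper's design.
import numpy as np

f = lambda x: -x + 0.5 * x**3      # assumed drift dynamics
g = lambda x: 1.0                  # assumed input gain

# Critic basis phi(x) and its gradient: V(x) ~ W^T phi(x)
phi  = lambda x: np.array([x**2, x**4])
dphi = lambda x: np.array([2.0 * x, 4.0 * x**3])

Q, R  = 1.0, 1.0                   # stage-cost weights (assumed)
alpha = 0.5                        # critic learning rate (assumed)
W = np.zeros(2)                    # single-critic weights

rng = np.random.default_rng(0)
for _ in range(5000):
    x = rng.uniform(-2.0, 2.0)                 # sample a state for training
    u = -0.5 / R * g(x) * (dphi(x) @ W)        # control induced by current critic
    # Hamiltonian (HJB) residual under the approximate value function
    e = dphi(x) @ W * (f(x) + g(x) * u) + Q * x**2 + R * u**2
    # Normalized gradient step that reduces e^2 / 2 with respect to W
    sigma = dphi(x) * (f(x) + g(x) * u)
    W -= alpha * e * sigma / (1.0 + sigma @ sigma)**2

print("learned critic weights:", W)
```

Because a single critic both evaluates and induces the control policy, no separate actor network is trained, which is the simplification the abstract refers to.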
