Abstract

This paper addresses the robust control problem for a class of uncertain nonlinear systems with completely unknown dynamics via a data-driven reinforcement learning method. First, we formulate the optimal regulation problem for the nominal system; the robust controller for the original uncertain system is then designed by adding a constant feedback gain to the optimal controller of the nominal system. This scheme is subsequently extended to optimal tracking control by means of an augmented system and a discount factor. We also show that, in the absence of control perturbations, the proposed robust controller achieves optimality with respect to a newly defined performance index function. It is well known that the nonlinear optimal control problem hinges on the solution of the Hamilton–Jacobi–Bellman (HJB) equation, a nonlinear partial differential equation that in general cannot be solved analytically. To overcome this difficulty, we introduce a model-based iterative learning algorithm that successively approximates the solution of the HJB equation, and we prove its convergence. Building on the structure of this model-based approach, a data-driven reinforcement learning method is then derived that requires only sampled data from the real system under different control inputs, rather than an accurate mathematical model of the system. Neural networks (NNs) are utilized to implement this model-free method and approximate the optimal solutions, and a least-squares approach is employed to minimize the NN approximation residual errors. Finally, two numerical simulation examples illustrate the effectiveness of the proposed method.
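
As a minimal illustration of the HJB equation referred to above (our notation, not taken from the paper), consider the input-affine setting standard in this literature: dynamics \dot{x} = f(x) + g(x)u with cost J = \int_0^\infty \big( Q(x) + u^\top R u \big)\, dt. The HJB equation for the optimal value function V^* then reads

    0 = Q(x) + (\nabla V^*(x))^\top f(x) - \frac{1}{4} (\nabla V^*(x))^\top g(x) R^{-1} g(x)^\top \nabla V^*(x),

with the associated optimal controller

    u^*(x) = -\frac{1}{2} R^{-1} g(x)^\top \nabla V^*(x).

Because \nabla V^* enters quadratically, this partial differential equation generally admits no closed-form solution, which is what motivates the iterative approximation scheme described in the abstract.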

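The sketch below illustrates, under simplifying assumptions, the least-squares policy-evaluation step that underlies such iterative schemes. The basis phi, its Jacobian dphi, and evaluate_policy are hypothetical names chosen for illustration; in particular, this sketch assumes measured state derivatives, whereas the paper's data-driven method works purely from sampled input-state data.

    import numpy as np

    # Hedged sketch: one least-squares policy-evaluation step for a critic
    # V(x) ~ w @ phi(x). Along trajectories under a fixed policy, the
    # residual w @ dphi(x) @ x_dot + r(x, u) should vanish; we solve for w
    # in the least-squares sense over the sampled data.

    def phi(x):
        # Quadratic polynomial basis for a 2-state system (illustrative).
        x1, x2 = x
        return np.array([x1**2, x1 * x2, x2**2])

    def dphi(x):
        # Jacobian d(phi)/dx, shape (3, 2).
        x1, x2 = x
        return np.array([[2.0 * x1, 0.0],
                         [x2, x1],
                         [0.0, 2.0 * x2]])

    def evaluate_policy(xs, x_dots, costs):
        # Stack one linear equation per sample: (dphi(x) @ x_dot) @ w = -r.
        A = np.array([dphi(x) @ xd for x, xd in zip(xs, x_dots)])
        b = -np.asarray(costs, dtype=float)
        w, *_ = np.linalg.lstsq(A, b, rcond=None)
        return w  # critic weights for the current policy

A policy-improvement step would then update the control from the fitted critic, e.g. u(x) = -\frac{1}{2} R^{-1} g(x)^\top dphi(x)^\top w in the model-based case; the model-free variant of the paper instead replaces the terms involving f and g with quantities estimated from sampled data.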