Abstract

This paper presents a data-based robust adaptive control methodology for a class of nonlinear constrained-input systems with completely unknown dynamics. By introducing a value function for the nominal system, the robust control problem is transformed into a constrained optimal control problem. Because the system dynamics are unavailable, a data-based integral reinforcement learning (RL) algorithm is developed to solve the constrained optimal control problem. With the developed algorithm, the value function and the control policy can be updated simultaneously using only system data. The convergence of the algorithm is proved via an established equivalence relationship. To implement the integral RL algorithm, an actor neural network (NN) and a critic NN are utilized to approximate the control policy and the value function, respectively, and the least squares method is employed to estimate the unknown parameters. By Lyapunov's direct method, the obtained approximate optimal control is shown to guarantee that the unknown nonlinear closed-loop system is stable in the sense of uniform ultimate boundedness. Two examples are provided to demonstrate the effectiveness and applicability of the theoretical results.
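The abstract's combination of an integral RL Bellman relation with a least-squares parameter estimate can be illustrated with a minimal sketch. Here the critic is linear in its parameters, V(x) ≈ wᵀφ(x) with a quadratic basis φ, and the weights are fitted from sampled trajectory data alone, with no model of the dynamics. The basis choice, function names, and data layout are illustrative assumptions, not taken from the paper; the actor NN update and the input-constraint handling are omitted.

```python
import numpy as np

def phi(x):
    """Quadratic polynomial basis for a 2-D state (an illustrative choice)."""
    x1, x2 = x
    return np.array([x1**2, x1 * x2, x2**2])

def integral_rl_critic_update(trajectory, rewards, dt):
    """Least-squares fit of critic weights from trajectory data only.

    The integral RL Bellman relation over one sampling interval [t, t+dt] is
        w^T (phi(x_t) - phi(x_{t+dt})) ~= integral of the stage cost
    which we approximate as rewards[k] * dt. Stacking many intervals gives an
    overdetermined linear system in w, solved by least squares -- no knowledge
    of the system dynamics is needed.
    """
    n = len(trajectory) - 1
    X = np.array([phi(trajectory[k]) - phi(trajectory[k + 1]) for k in range(n)])
    y = np.asarray(rewards[:n]) * dt
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w
```

As a sanity check, if the sampled stage costs are generated consistently with some true weight vector, the least-squares step recovers it exactly from data.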
