Abstract

This paper presents a data-based robust adaptive control methodology for a class of nonlinear constrained-input systems with completely unknown dynamics. By introducing a value function for the nominal system, the robust control problem is transformed into a constrained optimal control problem. Because the system dynamics are unavailable, a data-based integral reinforcement learning (RL) algorithm is developed to solve the constrained optimal control problem; with this algorithm, the value function and the control policy are updated simultaneously using only system data. The convergence of the developed algorithm is proved via an established equivalence relationship. To implement the integral RL algorithm, an actor neural network (NN) and a critic NN are employed to approximate the control policy and the value function, respectively, and the least squares method is used to estimate the unknown NN parameters. Using Lyapunov's direct method, the resulting approximate optimal control is shown to keep the unknown nonlinear system uniformly ultimately bounded. Two examples demonstrate the effectiveness and applicability of the theoretical results.
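The core idea summarized above, evaluating a policy from trajectory data via an integral Bellman relation and fitting critic weights by least squares, can be illustrated with a minimal sketch. Everything here is an illustrative assumption rather than the paper's formulation: a scalar system, a two-term polynomial critic basis, a quadratic cost, and no input constraints. In particular, the policy-improvement step below uses the true input gain directly, where the paper's data-based method would instead use an actor NN trained from data.

```python
import numpy as np

# Hypothetical 1-D system x' = a*x + b*u. The dynamics (a, b) are treated as
# unknown by the learner and are used only to generate trajectory data.
a, b = -1.0, 1.0
dt, T = 0.001, 0.05   # integration step and RL interval length

def phi(x):
    # Critic basis: V(x) ~ w . phi(x). Chosen arbitrarily for this sketch.
    return np.array([x**2, x**4])

def simulate(x0, u_fn, horizon):
    """Roll the system forward with policy u_fn; return final state and
    the accumulated stage cost (data-based: the learner sees only these)."""
    x, cost = x0, 0.0
    for _ in range(int(horizon / dt)):
        u = u_fn(x)
        cost += (x**2 + u**2) * dt     # quadratic stage cost r(x, u)
        x += (a * x + b * u) * dt      # Euler step of the "unknown" plant
    return x, cost

def evaluate_policy(u_fn, x0s):
    """Policy evaluation from data via the integral Bellman relation:
    w . (phi(x_t) - phi(x_{t+T})) = integral of r over [t, t+T]."""
    A = []
    y = []
    for x0 in x0s:
        xT, c = simulate(x0, u_fn, T)
        A.append(phi(x0) - phi(xT))
        y.append(c)
    w, *_ = np.linalg.lstsq(np.array(A), np.array(y), rcond=None)
    return w

def improve(w):
    # Greedy improvement u = -(b/2) dV/dx. NOTE: using the true gain b here
    # is a simplification; the paper's algorithm avoids it via an actor NN.
    return lambda x: -b * (w[0] * x + 2.0 * w[1] * x**3)

rng = np.random.default_rng(0)
x0s = rng.uniform(-2.0, 2.0, size=30)
policy = lambda x: -0.5 * x            # initial stabilizing policy
for _ in range(5):                     # policy iteration
    w = evaluate_policy(policy, x0s)
    policy = improve(w)

print("critic weights:", w)
```

For this linear-quadratic instance the fixed point is known analytically (V(x) = (sqrt(2) - 1) x^2), so the leading critic weight should converge to about 0.414, which gives a quick sanity check on the least-squares evaluation step.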
