Abstract

In this chapter, the robust control and optimal guaranteed cost control of continuous-time uncertain nonlinear systems are studied using adaptive dynamic programming (ADP) methods. First, a novel strategy is established to design the robust controller for a class of nonlinear systems with uncertainties based on an online policy iteration algorithm. By properly choosing a cost function that reflects the uncertainties, states, and controls, the robust control problem is transformed into an optimal control problem, which is solved under the framework of ADP. Then, the infinite-horizon optimal guaranteed cost control of uncertain nonlinear systems is investigated. A critic neural network is constructed to facilitate the solution of the modified Hamilton–Jacobi–Bellman (HJB) equation corresponding to the nominal system. An additional stabilizing term is introduced to ensure stability, which reinforces the updating process of the weight vector and relaxes the requirement of an initial stabilizing control. The uniform ultimate boundedness of the closed-loop system is analyzed using Lyapunov's direct method. Simulation examples are provided to verify the effectiveness of the presented control approaches.
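To make the policy-iteration idea concrete, the following is a minimal sketch, not the chapter's actual algorithm: it solves the HJB equation for a scalar linear system dx/dt = a*x + b*u with cost ∫(q*x² + r*u²)dt, where the "critic" reduces to a single weight p in the value function V(x) = p*x². The function name `policy_iteration` and all parameter values are illustrative assumptions chosen so the fixed point can be checked against the algebraic Riccati equation.

```python
import math

def policy_iteration(a=-1.0, b=1.0, q=1.0, r=1.0, k0=0.0, iters=50):
    """Alternate policy evaluation and improvement for dx/dt = a*x + b*u
    with cost integral of q*x^2 + r*u^2 (illustrative scalar case)."""
    k = k0  # initial stabilizing gain for u = -k*x (requires a - b*k < 0)
    for _ in range(iters):
        # Policy evaluation: for the closed loop dx/dt = (a - b*k)*x, the
        # value V(x) = p*x^2 satisfies 2*(a - b*k)*p + q + r*k^2 = 0.
        p = (q + r * k * k) / (2.0 * (b * k - a))
        # Policy improvement: u = -(b*p/r)*x minimizes the Hamiltonian
        # H = q*x^2 + r*u^2 + V'(x)*(a*x + b*u).
        k = b * p / r
    return p, k

p, k = policy_iteration()
# With a=-1, b=q=r=1 the Riccati equation 2*a*p - b**2*p**2/r + q = 0
# gives p* = sqrt(2) - 1, which the iteration converges to.
```

In the general nonlinear setting treated in the chapter, the scalar weight p is replaced by a critic-network weight vector and the policy-evaluation step is carried out online from system data rather than in closed form.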
