Abstract

In this chapter, iterative adaptive dynamic programming (ADP) algorithms are developed to solve optimal control problems for infinite-horizon discrete-time nonlinear systems with finite approximation errors. The idea is to use iterative ADP algorithms to obtain iterative control laws that guarantee convergence of the iterative value functions to their optima. The numerical optimal control problems are then solved by an adaptive learning control scheme based on the ADP algorithm. Stability properties of the system under the numerical iterative control laws are established, which allows the present iterative ADP algorithm to be implemented both online and offline. Moreover, a general value iteration (GVI) algorithm with finite approximation errors is developed to guarantee that the iterative value function converges to the solution of the Bellman equation. The GVI algorithm can be initialized by an arbitrary positive semidefinite function, which overcomes a disadvantage of traditional value iteration algorithms. Simulation examples are included to demonstrate the effectiveness of the present control strategies.
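
For reference, the discrete-time Bellman equation and a value-iteration recursion with per-iteration approximation errors of the kind discussed above are sketched below in standard notation; the symbols F (system dynamics), U (utility function), Psi (initial positive semidefinite function), and epsilon_i (finite approximation error) are assumptions introduced here for illustration and are not taken verbatim from the chapter.

% Discrete-time system: x_{k+1} = F(x_k, u_k).
% Optimal value function satisfies the Bellman equation
\[
  J^{*}(x_k) \;=\; \min_{u_k}\bigl\{\, U(x_k,u_k) + J^{*}\bigl(F(x_k,u_k)\bigr) \,\bigr\}.
\]
% General value iteration with finite approximation errors:
% initialize with an arbitrary positive semidefinite function \Psi and iterate
\[
  \hat{V}_0(x_k) = \Psi(x_k) \ge 0, \qquad
  \hat{V}_{i+1}(x_k) \;=\; \min_{u_k}\bigl\{\, U(x_k,u_k) + \hat{V}_i\bigl(F(x_k,u_k)\bigr) \,\bigr\} + \varepsilon_i,
\]
% where \varepsilon_i denotes the bounded error introduced at iteration i by the
% function approximators (e.g., neural networks) used to implement the iteration.

In this sketch, exact value iteration corresponds to epsilon_i = 0; the chapter's analysis concerns conditions under which the approximate iteration above still converges to a neighborhood of the Bellman solution.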
