Abstract

Adaptive/Approximate Dynamic Programming (ADP) is an online design approach proposed to enable real-time implementation of optimal controllers based on the solution of the Hamilton-Jacobi-Bellman (HJB) equation. In this paper, ADP schemes are presented in a Heuristic Dynamic Programming (HDP) framework, where Policy Iteration (PI) strategies combined with Recursive Least Squares (RLS) methods are used to solve online the Riccati-type HJB equation associated with the Discrete Linear Quadratic Regulator (DLQR) problem. However, these schemes have considerable numerical complexity, and numerical instability may arise from ill-conditioning of the covariance matrix in the RLS approach. Thus, in order to improve numerical stability, as well as to reduce the computational effort spent on approximating the DLQR cost function, UDU^T factorization and orthogonal decomposition methods, such as QR decomposition, are incorporated into the standard PI-HDP framework. The performance of the standard PI-HDP method and its variants is compared in terms of numerical stability and computational cost. It is shown that these variants yield significant computational performance improvements over the standard PI-HDP method.
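To make the scheme concrete, the following is a minimal sketch, not the paper's implementation, of a PI-HDP loop for a DLQR problem. The plant matrices A, B, the weights Q, R, and the sampling scheme are hypothetical illustrative choices. The quadratic cost function V(x) = x'Px is parameterized linearly in the distinct entries of xx', the Bellman equation for a fixed policy is solved by the standard covariance-form RLS recursion, and the policy is then improved from the recovered P. It is exactly this covariance update whose ill-conditioning the UDU^T and QR-based variants are designed to avoid.

```python
# Minimal PI-HDP sketch for DLQR (illustrative; not the paper's implementation).
import numpy as np

def quad_basis(x):
    """Quadratic basis phi(x): the n(n+1)/2 distinct entries of x x^T."""
    n = len(x)
    return np.array([x[i] * x[j] for i in range(n) for j in range(i, n)])

def theta_to_P(theta, n):
    """Rebuild the symmetric cost matrix P from the basis weights."""
    P = np.zeros((n, n))
    k = 0
    for i in range(n):
        for j in range(i, n):
            P[i, j] = theta[k] if i == j else theta[k] / 2.0
            P[j, i] = P[i, j]
            k += 1
    return P

# Hypothetical Schur-stable second-order plant and DLQR weights.
A = np.array([[0.95, 0.10],
              [0.00, 0.90]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])
n, m = 2, 1
p = n * (n + 1) // 2          # independent cost-function parameters

K = np.zeros((m, n))          # initial stabilizing policy (A itself is stable)
rng = np.random.default_rng(0)

for it in range(20):                          # policy iteration loop
    theta = np.zeros(p)
    Cov = 1e6 * np.eye(p)                     # RLS covariance, large initial value
    for k in range(200):                      # policy evaluation from sampled states
        x = rng.standard_normal(n)
        u = -K @ x
        r = x @ Q @ x + u @ R @ u             # stage cost
        x_next = A @ x + B @ u
        # Bellman equation V(x) = r + V(x_next) as a linear regression in theta:
        h = quad_basis(x) - quad_basis(x_next)
        g = Cov @ h / (1.0 + h @ Cov @ h)     # RLS gain
        theta = theta + g * (r - h @ theta)   # parameter update
        Cov = Cov - np.outer(g, h @ Cov)      # covariance update (ill-conditioning risk)
    P = theta_to_P(theta, n)
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # policy improvement

print("Learned DLQR gain K:\n", K)
```

In the factorized variants the paper describes, the covariance recursion above is not propagated directly: a UDU^T (or QR/square-root) factor is updated instead, which preserves symmetry and positive definiteness of the covariance and thereby improves numerical stability.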
