Abstract

This work is concerned with the application of reinforcement learning (RL) techniques to adaptive dynamic programming (ADP) for systems with partly unknown models. In ADP, one seeks to approximate an optimal infinite-horizon cost function, the value function. Such an approximation, i.e., a critic, does not in general yield a stabilizing control policy, i.e., a stabilizing actor. Guaranteeing stability of nonlinear systems under RL/ADP is still an open issue. In this work, it is suggested to use a stability constraint directly in the actor-critic structure. The system model considered here is assumed to be only partially known; specifically, it contains an unknown parameter vector. A suitable stabilizability assumption for such systems is the existence of an adaptive Lyapunov function, as is commonly assumed in adaptive control. The present approach formulates a stability constraint based on such an adaptive Lyapunov function to ensure closed-loop stability. Convergence of the actor and critic parameters in a suitable sense is shown. A case study demonstrates how the suggested algorithm preserves closed-loop stability while at the same time improving infinite-horizon performance.
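
To make the idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual algorithm) of one stability-constrained actor step: the control is chosen to minimize the critic-estimated cost-to-go subject to a decrease condition on an adaptive Lyapunov function evaluated at the current parameter estimate. All model, cost, critic, and Lyapunov functions below are illustrative placeholders, not quantities taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder dynamics with an unknown-parameter term; purely illustrative.
def f(x, u, theta_hat):
    A = np.array([[1.0, 0.1], [0.0, 1.0]])
    B = np.array([[0.0], [0.1]])
    return A @ x + B @ u + theta_hat * np.tanh(x)

# Illustrative quadratic stage cost.
def running_cost(x, u):
    return float(x @ x + 0.1 * (u @ u))

# Illustrative critic parameterization: V_hat(x) = w1*x1^2 + w2*x2^2.
def critic(x, w):
    return float(w @ (x ** 2))

# Adaptive Lyapunov function candidate (placeholder): quadratic in the state
# plus a term in the parameter estimate, as is common in adaptive control.
def lyap(x, theta_hat):
    return float(x @ x) + 0.5 * theta_hat ** 2

def actor_step(x, theta_hat, w, u0, decay=1e-3):
    """One stability-constrained actor step: minimize the critic-estimated
    cost-to-go subject to a decrease of the adaptive Lyapunov function."""
    def objective(u):
        return running_cost(x, u) + critic(f(x, u, theta_hat), w)

    def stability_margin(u):
        # scipy 'ineq' constraints must be nonnegative at the solution:
        # require lyap to drop by at least decay * ||x||^2 over one step.
        return lyap(x, theta_hat) - lyap(f(x, u, theta_hat), theta_hat) - decay * (x @ x)

    res = minimize(objective, u0,
                   constraints=[{"type": "ineq", "fun": stability_margin}])
    return res.x

if __name__ == "__main__":
    x = np.array([1.0, -0.5])
    u = actor_step(x, theta_hat=0.2, w=np.array([1.0, 1.0]), u0=np.zeros(1))
    print("stability-constrained control:", u)
```

The design choice sketched here is that the stability constraint, rather than the critic, is what guarantees closed-loop stability, while the critic objective is used to improve infinite-horizon performance among the controls that satisfy the constraint.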
