Abstract

This article proposes a data-efficient, model-free reinforcement learning (RL) algorithm based on Koopman operators for complex nonlinear systems. A high-dimensional, data-driven optimal control problem for the nonlinear system is formulated by lifting the dynamics into a linear system model. Using a data-driven, model-based RL framework, we derive an off-policy Bellman equation. Building on this equation, we deduce a data-efficient RL algorithm that does not require the Koopman-built linear system model, preserving dynamic information while reducing the amount of data needed to learn the optimal control. Numerical and theoretical analyses of the Koopman eigenfunctions used for dataset truncation in the proposed model-free, data-efficient RL algorithm are also presented. We validate the framework on the excitation control of a power system.
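To make the lifting idea concrete, the sketch below shows a generic, EDMD-style Koopman lifting fitted from data: a nonlinear system is approximated by a linear model z' ≈ A z + B u in a lifted observable space, on which a linear-quadratic or RL controller can subsequently be designed. This is only an illustration of the general Koopman-lifting step, not the paper's algorithm; the toy pendulum-like system, the observable dictionary, and all function names are assumptions introduced here.

```python
# Minimal, hypothetical sketch of data-driven Koopman lifting (EDMD-style).
# Not the paper's method: the system, dictionary, and names are assumed.
import numpy as np

def lift(x):
    """Hypothetical observable dictionary: state plus simple nonlinearities."""
    x1, x2 = x
    return np.array([x1, x2, x1**2, x1 * x2, np.sin(x1)])

def fit_koopman(X, U, X_next):
    """Least-squares fit of lifted linear dynamics z' ~= A z + B u."""
    Z = np.array([lift(x) for x in X]).T            # lifted states, (n_lift, N)
    Z_next = np.array([lift(x) for x in X_next]).T  # lifted successor states
    ZU = np.vstack([Z, U.T])                        # stack lifted state and input
    G = Z_next @ np.linalg.pinv(ZU)                 # [A | B] via pseudoinverse
    n = Z.shape[0]
    return G[:, :n], G[:, n:]                       # A, B

def step(x, u, dt=0.05):
    """Toy nonlinear (pendulum-like) system, assumed only for illustration."""
    x1, x2 = x
    return np.array([x1 + dt * x2, x2 + dt * (-np.sin(x1) - 0.1 * x2 + u)])

# Collect random transition data and fit the lifted linear model.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))
U = rng.uniform(-1, 1, size=(500, 1))
X_next = np.array([step(x, u[0]) for x, u in zip(X, U)])
A, B = fit_koopman(X, U, X_next)

# Check one-step prediction of the lifted linear model on a fresh point.
x_test, u_test = np.array([0.3, -0.2]), np.array([0.5])
z_pred = A @ lift(x_test) + B @ u_test
print("True next state:        ", step(x_test, u_test[0]))
print("Predicted (first coords):", z_pred[:2])
```

Under these assumptions, optimal control can then be learned on the lifted pair (A, B); the article's contribution, by contrast, is an off-policy, model-free formulation that avoids explicitly building this lifted model while retaining its data-efficiency benefits.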
