Abstract

In this paper, we propose novel data-driven optimal dynamic controller design frameworks, under both state feedback and output feedback, for solving optimal output regulation problems of linear discrete-time systems subject to unknown dynamics and unmeasurable disturbances using reinforcement learning (RL). In contrast to existing research on optimal output regulation and RL, the proposed procedures determine the optimal control gain and the optimal dynamic compensator simultaneously, rather than presetting a non-optimal dynamic compensator. Moreover, we present incremental dataset-based RL algorithms that learn the optimal dynamic controllers without requiring measurements of the external disturbance or the exostate during learning, which is of great practical importance. In addition, we show that the proposed incremental dataset-based learning methods are more robust than routine RL algorithms to a class of measurement noises of arbitrary magnitude. Comprehensive simulation results validate the efficacy of the proposed methods.
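For context, the problem class addressed above is typically posed as follows; the notation is the standard one for linear output regulation and is an assumption here rather than the paper's own:

$$
x_{k+1} = A x_k + B u_k + D v_k, \qquad
e_k = C x_k + F v_k, \qquad
v_{k+1} = E v_k,
$$

where $v_k$ is the unmeasurable exostate generating both the disturbance and the reference, and $e_k$ is the regulation error to be driven to zero while a quadratic cost is minimized. Solvability of the regulation part reduces to the discrete-time regulator (Francis) equations $XE = AX + BU + D$ and $0 = CX + F$ in the unknowns $(X, U)$.

The paper's incremental dataset-based algorithms are not reproduced here. As a rough illustration of the data-driven RL machinery that such designs build on, the sketch below implements standard off-policy Q-learning policy iteration for a discrete-time LQR problem (no exosystem); the system matrices and all numerical values are hypothetical and are used only to generate data, never by the learner itself.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Hypothetical 2-state, 1-input system, used ONLY to generate data;
# the learner never accesses A or B directly.
A = np.array([[0.9, 0.2],
              [0.0, 0.8]])
B = np.array([[0.0],
              [1.0]])
n, m = 2, 1
Qc = np.eye(n)          # state cost weight
Rc = np.eye(m)          # input cost weight

# Collect one batch of trajectory data with an exploratory behavior policy.
rng = np.random.default_rng(0)
N = 400
X = np.zeros((N + 1, n))
U = np.zeros((N, m))
X[0] = rng.standard_normal(n)
for k in range(N):
    U[k] = 0.3 * rng.standard_normal(m)      # pure exploration (A is Schur stable)
    X[k + 1] = A @ X[k] + B @ U[k]

def feat(x, u):
    """Quadratic features phi(z) = vec(z z^T) for z = [x; u]."""
    z = np.concatenate([x, u])
    return np.kron(z, z)

K = np.zeros((m, n))    # initial stabilizing gain (K = 0 suffices: A is Schur)
for it in range(10):
    # Policy evaluation: solve the Q-function Bellman equation
    #   phi(x_k, u_k)' theta - phi(x_{k+1}, -K x_{k+1})' theta = r_k
    # by least squares over the recorded batch (off-policy reuse of the data).
    Phi, r = [], []
    for k in range(N):
        u_next = -K @ X[k + 1]               # action the CURRENT policy would take
        Phi.append(feat(X[k], U[k]) - feat(X[k + 1], u_next))
        r.append(X[k] @ Qc @ X[k] + U[k] @ Rc @ U[k])
    theta, *_ = np.linalg.lstsq(np.array(Phi), np.array(r), rcond=None)
    H = theta.reshape(n + m, n + m)
    H = 0.5 * (H + H.T)                      # enforce symmetry of the Q-matrix
    # Policy improvement: u = -K x with K = H_uu^{-1} H_ux.
    K = np.linalg.solve(H[n:, n:], H[n:, :n])

# Compare with the model-based LQR gain from the Riccati equation.
P = solve_discrete_are(A, B, Qc, Rc)
K_star = np.linalg.solve(Rc + B.T @ P @ B, B.T @ P @ A)
print("learned K:", K)
print("optimal K*:", K_star)
```

With sufficiently exciting data, policy iteration of this kind converges to the Riccati-optimal gain from a stabilizing initial policy; the final print statements compare the learned gain against the gain computed from the discrete algebraic Riccati equation. The paper's contribution goes beyond this template by also learning the dynamic compensator and by dispensing with disturbance and exostate measurements.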
