Abstract

This paper addresses the infinite-time linear quadratic tracking (LQT) problem for linear systems with parametric uncertainty. Traditional solutions to the LQT problem introduce a discount factor to keep the cost function bounded over the infinite horizon; however, discounting can compromise the stability of the closed-loop system. To overcome this issue, this paper adopts an undiscounted cost function that guarantees asymptotic stability of the uncertain closed-loop system. Reinforcement learning (RL) algorithms are employed to design the control scheme without requiring precise knowledge of the system dynamics. For systems whose uncertain parameters may render them unstable, however, convergence of RL algorithms to a stabilising solution is not guaranteed. To address this limitation, a robust optimal control structure is developed using on-policy and off-policy RL algorithms, yielding a model-free controller. The effectiveness of the proposed robust optimal controller is validated through comparative simulations on an uncertain model of a DC–DC buck converter connected to a constant power load, which demonstrate its advantages in handling parametric uncertainty and preserving closed-loop stability.
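For context, the following is a minimal sketch of the two cost formulations the abstract contrasts, under assumed notation not taken from the paper (output y_k, reference r_k, input u_k, weights Q ⪰ 0 and R ≻ 0, discount factor γ); the paper's exact tracking-error and weighting definitions may differ.

```latex
% Discounted LQT cost (standard formulation, assumed notation):
% a factor 0 < \gamma < 1 keeps J_\gamma bounded over the infinite
% horizon, but can weaken closed-loop stability guarantees.
J_\gamma = \sum_{k=0}^{\infty} \gamma^{k}
  \left[ (y_k - r_k)^{\top} Q \, (y_k - r_k) + u_k^{\top} R \, u_k \right]

% Undiscounted cost (\gamma = 1), as advocated in the abstract:
% boundedness must instead come from the tracking error and input
% vanishing asymptotically, tying optimality to asymptotic stability.
J = \sum_{k=0}^{\infty}
  \left[ (y_k - r_k)^{\top} Q \, (y_k - r_k) + u_k^{\top} R \, u_k \right]
```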
