Abstract
In this paper, a data-driven adaptive optimal control strategy is proposed for a class of linear systems with structured time-varying uncertainty, minimizing the upper bound of a pre-defined cost function while maintaining closed-loop stability. An off-policy data-driven reinforcement learning algorithm is presented, which repeatedly uses the online state signal over fixed time intervals without requiring knowledge of the system dynamics, yielding a guaranteed cost control (GCC) law with quadratic stability for the system. This law is further optimized through a particle swarm optimization (PSO) method whose parameters are adaptively adjusted by a fuzzy logic mechanism, and an optimal GCC law with the minimum upper bound of the cost function is finally obtained. The effectiveness of this strategy is verified on the dynamic model of a two-degree-of-freedom helicopter, showing that both stability and convergence of the closed-loop system are guaranteed and that the cost is minimized with far fewer iterations than the conventional PSO method with constant parameters.
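To make the adaptive-PSO idea concrete, the following is a minimal, self-contained sketch of particle swarm optimization in which the inertia weight is adjusted online based on search progress. This is an illustrative stand-in, not the paper's algorithm: the cost function, swarm size, acceleration coefficients, and the simple improve/stagnate adjustment rule (used here in place of a full fuzzy inference system) are all assumptions for demonstration.

```python
import random

def adaptive_pso(cost, dim, bounds, n_particles=20, iters=100, seed=0):
    """PSO with an adaptively adjusted inertia weight (illustrative sketch)."""
    rng = random.Random(seed)
    lo, hi = bounds
    # Initialize particle positions randomly within bounds; velocities at zero.
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal best positions
    pbest_val = [cost(p) for p in pos]          # personal best costs
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    w = 0.9                                     # initial inertia weight
    c1 = c2 = 2.0                               # acceleration coefficients (assumed)
    for _ in range(iters):
        improved = False
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Standard PSO velocity update: inertia + cognitive + social terms.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Clamp positions to the search bounds.
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            v = cost(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
                    improved = True
        # Crude stand-in for the fuzzy parameter adjustment: decrease inertia
        # (exploit) while the global best keeps improving, increase it (explore)
        # when the search stagnates.
        w = max(0.4, w * 0.97) if improved else min(0.9, w * 1.02)
    return gbest, gbest_val

# Usage: minimize a simple quadratic cost as a placeholder for the GCC
# upper bound that the paper's PSO stage would minimize.
best_x, best_cost = adaptive_pso(lambda x: sum(xi * xi for xi in x),
                                 dim=2, bounds=(-5.0, 5.0))
```

In the paper's setting, the decision variables would parameterize the GCC law and the cost would be the guaranteed upper bound of the quadratic cost function; the fuzzy mechanism replaces the two-branch rule above with rule-based inference over richer search-state indicators.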