Abstract

This article studies the conflicting goals of high-precision tracking and fast convergence, a longstanding problem in the learning control of stochastic systems. In such systems, a decreasing gain sequence is necessary to ensure the asymptotic convergence of the generated input sequence to a fixed limit; however, gain sequences of this nature slow convergence. In this article, we propose a novel multistage learning control strategy to resolve this conflict, where each stage consists of several iterations. The learning gain remains constant within each stage but is reduced at the transition from one stage to the next. The switching iteration between two stages is determined by a tracking performance index of the contracted input error and the accumulated noise drift. Furthermore, an improved mechanism is proposed to optimize the lengths of the different stages. The asymptotic convergence of the input sequence generated by the newly proposed strategy is rigorously established by thoroughly analyzing the properties of the proposed gain sequence. Numerical simulations are presented to verify the theoretical results.
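As a rough illustration of the idea described above, the Python sketch below runs an iterative learning control update with a gain that is held constant within each stage and cut only at stage switches. The scalar plant, the gain and shrink values, and the simple budget-based switching rule are all assumptions made for illustration; the paper's actual switching criterion is defined through the contracted input error and the accumulated noise drift, which are not reproduced here.

```python
import numpy as np

# Hypothetical scalar stochastic plant: y_k(t) = c * u_k(t) + noise.
# ILC update: u_{k+1} = u_k + gain * e_k, where e_k is the tracking error.
# Classical schemes use a decreasing gain (e.g., a0 / k); the multistage idea
# keeps the gain constant within a stage and reduces it only at stage switches.

rng = np.random.default_rng(0)

T = 50                                         # trajectory length
c = 1.0                                        # hypothetical plant gain
y_d = np.sin(np.linspace(0, 2 * np.pi, T))     # desired trajectory
sigma = 0.05                                   # measurement-noise level

u = np.zeros(T)        # input iterate
gain = 0.8             # constant learning gain for the current stage
shrink = 0.5           # factor applied to the gain at each stage switch
stage_budget = 20      # illustrative cap on iterations per stage (an assumption)
since_switch = 0

for k in range(1, 301):
    y = c * u + sigma * rng.standard_normal(T)   # noisy output
    e = y_d - y                                  # tracking error
    u = u + gain * e                             # ILC update with the stage gain

    since_switch += 1
    # Surrogate switching rule (not the paper's performance index):
    # switch to the next stage once the stage budget is exhausted.
    if since_switch >= stage_budget:
        gain *= shrink
        since_switch = 0

print("final RMS tracking error:", np.sqrt(np.mean((y_d - c * u) ** 2)))
```

Under these assumptions, the larger constant gain in early stages drives the error down quickly, while the shrinking gain in later stages suppresses the effect of the noise, mirroring the trade-off the proposed strategy is designed to resolve.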
