Abstract

This article studies the conflicting goals of high-precision tracking and fast convergence, a longstanding problem in the learning control of stochastic systems. In such systems, a decreasing gain sequence is necessary to ensure the asymptotic convergence of the generated input sequence to a fixed limit. However, such gain sequences adversely affect the convergence speed. In this article, we propose a novel multistage learning control strategy to resolve this conflict, where each stage consists of several iterations. The learning gain remains constant within each stage but is reduced at the transition from one stage to the next. The switching iteration between two stages is determined by the tracking performance index of the contracted input error and the accumulated noise drift. Furthermore, an improved mechanism is proposed to optimize the lengths of the different stages. The asymptotic convergence of the input sequence generated by the newly proposed strategy is rigorously established by thoroughly analyzing the properties of the proposed gain sequence. Numerical simulations are presented to verify the theoretical results.
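To make the multistage idea concrete, the sketch below illustrates one plausible reading of the strategy on a toy problem: an iterative learning control update on a scalar stochastic system, where the learning gain is held constant within a stage and shrunk when a simple performance index stops improving. The system model, gain values, shrink factor, and the window-averaged error index are all assumptions for illustration; they are not the authors' exact algorithm or switching rule.

```python
# Illustrative sketch (assumed model and switching rule, not the authors' exact
# algorithm): iterative learning control of a scalar stochastic system
# y_k(t) = c * u_k(t) + noise, with a multistage gain schedule in which the
# gain stays constant within a stage and is reduced between stages.
import numpy as np

rng = np.random.default_rng(0)

T = 50                                            # trajectory length
c = 1.0                                           # assumed system gain
y_ref = np.sin(np.linspace(0, 2 * np.pi, T))      # desired trajectory

u = np.zeros(T)             # initial input
gain = 0.8                  # stage-1 learning gain (assumed)
shrink = 0.5                # gain reduction factor between stages (assumed)
window = 10                 # iterations used to evaluate the performance index
noise_std = 0.05

errors = []                 # averaged tracking error per iteration
for k in range(200):
    y = c * u + noise_std * rng.standard_normal(T)   # stochastic output
    e = y_ref - y                                     # tracking error
    u = u + gain * e                                  # ILC update with the stage gain
    errors.append(np.mean(np.abs(e)))

    # Hypothetical switching rule: when the window-averaged error no longer
    # improves (noise dominates the contraction), move to the next stage by
    # shrinking the gain, which attenuates the accumulated noise drift.
    if len(errors) >= 2 * window:
        recent = np.mean(errors[-window:])
        previous = np.mean(errors[-2 * window:-window])
        if recent >= previous:
            gain *= shrink
            errors = errors[-window:]   # restart the index for the new stage

print(f"final gain: {gain:.4f}, final averaged error: {errors[-1]:.4f}")
```

The sketch reflects the trade-off the article targets: a constant gain within a stage gives fast contraction of the input error, while reducing the gain between stages mimics a decreasing gain sequence and keeps the accumulated stochastic noise from preventing convergence.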
