Abstract

The principal techniques used up to now for the analysis of stochastic adaptive control systems have been (i) super-martingale (often called stochastic Lyapunov) methods and (ii) methods relying upon the strong consistency of some parameter estimation scheme. Optimal stochastic control and filtering methods have also been employed. Although there have been some successes, the extension of these techniques to a broad class of adaptive control problems, including the case of time-varying parameters, has been difficult. In this paper, a new approach is adopted: if an underlying Markovian state-space system for the controlled process is available, and if this process possesses stationary transition probabilities, then the powerful ergodic theory of Markov processes may be applied. Subject to technical conditions, one may deduce (amongst other facts) (i) the existence of an invariant measure μ∞ for the process and (ii) the almost sure convergence of the sample averages of a function of the state process (and of its expectation) to its conditional expectation [μ∞] with respect to the sub-σ-field ΣI of invariant sets. The technique is illustrated by an application to a previously unsolved problem involving a linear system with unbounded random time-varying parameters. Work supported by Canada NSERC Grant No. 1329 and a UK SERC Visiting Research Fellowship.
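The convergence fact invoked in (ii) is the standard Birkhoff-type ergodic theorem for Markov processes with stationary transition probabilities. A minimal sketch of the statement, with the symbols f, x_t, and N illustrative rather than taken from the paper's body: if the state process {x_t} admits an invariant measure μ∞, then for any f ∈ L¹(μ∞),

\lim_{N \to \infty} \frac{1}{N} \sum_{t=1}^{N} f(x_t) = E_{\mu_\infty}\!\left[ f(x_0) \mid \Sigma_I \right] \quad \text{almost surely},

where Σ_I is the σ-field of invariant sets. When the process is ergodic, Σ_I is trivial and the limit reduces to the constant ∫ f dμ∞.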
