In stochastic system theory the analysis of a controlled system is typically carried out by assuming some idealized system model, and then establishing the desired conclusions for the ideal system. An underlying assumption is that the results obtained for the model will apply to the true system if the model is good enough. In this paper we apply the theory of Markov processes to attempt to justify this assumption. It is assumed that a Markov chain Φ evolving on Euclidean space exists, and that the input and output processes appear as functions of Φ. Stochastic systems of this form are commonly found in stochastic adaptive control problems. For example, the controlled systems of [Meyn and Caines, 1987] and [Goodwin, Ramadge, and Caines, 1981] are of this form when the disturbance processes are independent and identically distributed (i.i.d.). It is demonstrated that invariant probabilities on the state process determine the asymptotic behavior of the overall system. The robustness questions of interest are then explored by introducing a notion of convergence for stochastic systems and investigating the behavior of the invariant probabilities corresponding to a convergent sequence of stochastic systems. These general results are applied to the analysis of linear state space systems under nonlinear feedback.

I am indebted to Peter Caines of the Department of Electrical Engineering at McGill University for suggesting that I apply the "Markov state space" theory of this paper and [Meyn and Caines, 1988] to the analysis of linear state space systems under nonlinear feedback. This application forms the second half of this paper.
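To illustrate the setting, the sketch below (not taken from the paper) simulates a linear state space system driven by i.i.d. disturbances and closed by a static nonlinear feedback law; because the disturbance is i.i.d. and the feedback depends only on the current output, the closed-loop state is a Markov chain on Euclidean space, and its long-run empirical distribution stands in for an invariant probability. The matrices A, B, C, the saturating feedback phi, the noise level, and the horizon are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (assumed) system matrices for x_{k+1} = A x_k + B u_k + w_k, y_k = C x_k.
A = np.array([[0.9, 0.2],
              [0.0, 0.7]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

def phi(y):
    """Static nonlinear feedback law: a saturated proportional controller (assumption)."""
    return -np.tanh(2.0 * y)

def step(x):
    """One transition of the closed-loop state.

    Since w_k is i.i.d. and u_k = phi(C x_k) depends only on the current state,
    the next state is a function of the current state and fresh noise alone,
    so (x_k) is a Markov chain on R^2.
    """
    y = C @ x
    w = rng.normal(size=2)
    return A @ x + (B @ phi(y)).ravel() + 0.1 * w

# Crude empirical look at the long-run behavior: sample the first state
# coordinate after a burn-in, as a stand-in for sampling an invariant law.
x = np.zeros(2)
samples = []
for k in range(20_000):
    x = step(x)
    if k > 1_000:  # discard the initial transient
        samples.append(x[0])

print("empirical mean / std of x[0]:", np.mean(samples), np.std(samples))
```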