Abstract

Introduction. This is a sequel to my paper [1]. The present developments are largely independent of the previous results except in so far as indicated in the Appendix. Theorem 1 shows a kind of solidarity among the states of a recurrent class; it generalizes a classical result due to Kolmogorov and permits a classification of recurrent states and classes. In §2 some relations involving the mean recurrence and first passage times are given. In §§3-5 sequences of random variables associated in a natural way with a Markov chain are studied. Theorem 2 is a generalized ergodic theorem which applies to any recurrent class, positive or null. It turns out that in a null class there is a set of numbers which plays the role of stationary absolute probabilities. In the case of a recurrent random walk with independent, stationary steps these numbers are all equal to one and the result is particularly simple. Theorem 3 shows that the kind of solidarity exhibited in Theorem 1 persists in such a sequence; it leads to the clarification of certain conditions stated by Doblin(2) in connection with his central limit theorem. Using a fundamental idea due to Doblin, the weak and strong laws of large numbers, the central limit theorem, the law of the iterated logarithm, and the limit theorems for the maxima of the associated sequence are proved very simply. Owing to the great simplicity of the method, it is the conditions of validity of these limit theorems that deserve attention. Among other things, we shall show by an example that a certain set of conditions, attributed to Kolmogorov, is in reality not sufficient for the validity of the central limit theorem. Furthermore, conditions of validity for the strong limit theorems and the limit theorems for the maxima are obtained by a rather natural strengthening of corresponding conditions for the weak limit theorems. A word about the connection of these conditions with martingale theory closes the paper.

1. The sequence of random variables $\{X_n\}$, $n = 0, 1, 2, \ldots$, forms a denumerable Markov chain with stationary transition probabilities. The states will be denoted by the non-negative integers(3) $0, 1, 2, \ldots$. The $n$-step transition probability from the state $i$ to the state $j$ will be denoted by $P_{ij}^{(n)}$ ($P_{ij}^{(1)} = P_{ij}$). Thus we have
$$P_{ij}^{(m+n)} = \sum_{k} P_{ik}^{(m)} P_{kj}^{(n)}.$$
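For a finite chain the $n$-step transition probabilities are simply the entries of the $n$-th power of the one-step transition matrix. The following sketch (a modern numerical illustration, not part of the original paper; the 3-state matrix is hypothetical) checks this, together with the Chapman-Kolmogorov relation above:

```python
import numpy as np

# Hypothetical 3-state transition matrix; each row sums to 1.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.4, 0.4, 0.2],
])

# n-step transition probabilities P_ij^(n) are the entries of P**n.
n, m = 4, 2
Pn = np.linalg.matrix_power(P, n)
Pm = np.linalg.matrix_power(P, m)

# Chapman-Kolmogorov: P^(m+n) = P^(m) P^(n), entrywise
# P_ij^(m+n) = sum_k P_ik^(m) P_kj^(n).
assert np.allclose(np.linalg.matrix_power(P, m + n), Pm @ Pn)

# Each P^(n) is again a stochastic matrix: rows sum to 1.
assert np.allclose(Pn.sum(axis=1), 1.0)
```

The same identity holds for denumerable state spaces with the sum over $k$ running over all states.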
