Abstract

Bounds on convergence rates for Markov chains are a very widely-studied topic, motivated largely by applications to Markov chain Monte Carlo algorithms. For Markov chains on finite state spaces, previous authors have obtained a number of very useful bounds, including those which involve choices of paths. Unfortunately, many Markov chains which arise in practice are not finite. In this paper, we consider the extent to which bounds for finite Markov chains can be extended to infinite chains. Our results take two forms. For countably-infinite state spaces X, we consider the process of enlargements of Markov chains, namely considering Markov chains on finite state spaces X_1, X_2, ... whose union is X. Bounds for the Markov chains restricted to X_d, if uniform in d, immediately imply bounds on X. Results for finite Markov chains, involving choices of paths, can then be applied to countable chains. We develop these ideas and apply them to several examples of the Metropolis-Hastings algorithm on countable state spaces. For uncountable state spaces, we consider the process of refinements of Markov chains. Namely, we break the original state space X into a countable number of smaller and smaller pieces, and define a Markov chain on these pieces which approximates the original chain. Under certain continuity assumptions, bounds on the countable Markov chains, including those related to choices of paths, will imply bounds on the original chain. We develop these ideas and apply them to an example of an uncountable state space Metropolis algorithm.
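To make the enlargement idea concrete, the following is a minimal illustrative sketch (not taken from the paper): a Metropolis chain on the countable state space X = {0, 1, 2, ...} with a Poisson-like target, run alongside its restriction to a finite enlargement X_d = {0, ..., d}, where proposals that leave X_d are simply rejected. The target distribution, the +/-1 proposal, and the parameter names lam and d are hypothetical choices for illustration only.

```python
"""
Illustrative sketch only: a Metropolis chain on the countable state space
X = {0, 1, 2, ...} and its restriction to the finite enlargement
X_d = {0, ..., d}.  The Poisson-like target and the +/-1 proposal are
assumed for the example; they are not taken from the paper.
"""
import math
import random

def log_target(x, lam=3.0):
    # Unnormalized log-density of a Poisson(lam)-like target on {0, 1, 2, ...}.
    if x < 0:
        return float("-inf")
    return x * math.log(lam) - math.lgamma(x + 1)

def metropolis_step(x, d=None):
    """One Metropolis step with a symmetric +/-1 proposal.

    If d is given, the chain is restricted to X_d = {0, ..., d}: proposals
    outside X_d are rejected, so X_d is kept invariant by the restricted chain.
    """
    y = x + random.choice([-1, 1])
    if d is not None and not (0 <= y <= d):
        return x  # proposal leaves X_d: reject
    log_alpha = log_target(y) - log_target(x)
    if math.log(random.random()) < log_alpha:
        return y
    return x

if __name__ == "__main__":
    # Run the full (countable) chain and a restricted chain side by side.
    x_full, x_restricted = 0, 0
    for _ in range(10_000):
        x_full = metropolis_step(x_full)
        x_restricted = metropolis_step(x_restricted, d=10)
    print("full chain state:", x_full, " restricted chain state:", x_restricted)
```

In this toy setting, the restricted chains on X_1, X_2, ... are the finite chains to which path-based bounds apply; the paper's point is that bounds for these restrictions, when uniform in d, carry over to the full countable chain.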
