Abstract
We study a class of adaptive Markov Chain Monte Carlo (MCMC) processes which aim to behave like an ``optimal'' target process via a learning procedure. We show, under appropriate conditions, that the adaptive MCMC chain and the ``optimal'' (nonadaptive) MCMC process share many asymptotic properties. The special case of adaptive MCMC algorithms governed by stochastic approximation is considered in detail, and we apply our results to the adaptive Metropolis algorithm of Haario, Saksman, and Tamminen (2001).
Highlights
Markov chain Monte Carlo (MCMC) is a popular computational method for generating samples from virtually any distribution π defined on a space X.
The method consists of simulating an ergodic Markov chain {Xn, n ≥ 0} on X with transition probability P such that π is a stationary distribution for this chain.
This paper addresses the problem of the efficiency of adaptive MCMC.
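The mechanism described above — simulating a Markov chain whose stationary distribution is the target π — can be illustrated with a minimal random-walk Metropolis sampler. This is a generic sketch, not code from the paper; the function names, the step size, and the one-dimensional Gaussian target are all illustrative choices.

```python
import numpy as np

def metropolis(log_target, x0, n_iter, step=1.0, seed=0):
    """Minimal random-walk Metropolis chain {Xn, n >= 0} whose
    stationary distribution is the target pi (known up to a constant
    through its log-density `log_target`)."""
    rng = np.random.default_rng(seed)
    x = float(x0)
    lp_x = log_target(x)
    samples = np.empty(n_iter)
    for n in range(n_iter):
        y = x + step * rng.normal()        # symmetric Gaussian proposal
        lp_y = log_target(y)
        # Metropolis accept/reject step preserves pi as stationary law.
        if np.log(rng.random()) < lp_y - lp_x:
            x, lp_x = y, lp_y
        samples[n] = x
    return samples

# Illustrative target: a standard normal, so log pi(x) = -x^2/2 + const.
chain = metropolis(lambda x: -0.5 * x * x, x0=3.0, n_iter=50000, step=2.0)
```

After a burn-in period, the empirical moments of the chain approximate those of π; the efficiency of this approximation is exactly what the tuning of `step` (and, in the adaptive algorithms studied in the paper, of the whole proposal) governs.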
Summary
Markov chain Monte Carlo (MCMC) is a popular computational method for generating samples from virtually any distribution π defined on a space X. We pay particular attention to the case where {θn} is constructed through a stochastic approximation recursion: most existing adaptive MCMC algorithms rely on this mechanism (Haario et al. (2001), Andrieu and Moulines (2006), Atchade and Rosenthal (2005)). In particular, we derive verifiable conditions that ensure the mean square convergence of θn to a unique limit point θ∗ and prove a bound on the rate of this convergence (Theorem 3.1). These results apply, for example, to the adaptive Metropolis algorithm of Haario et al. (2001) and show that the stochastic process generated by this algorithm is asymptotically stationary in the weak convergence sense, with a limit distribution that is (almost) optimal. We apply our results to the adaptive Metropolis algorithm of Haario et al. (2001) (Proposition 3.1).
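The stochastic approximation recursion behind the adaptive Metropolis algorithm of Haario et al. (2001) can be sketched as follows: the adapted parameter θn = (μn, Σn) is a running estimate of the target's mean and covariance, updated with a step size γ, and the proposal covariance is the classical scaling (2.38²/d)·Σn plus a small regularization εI. This is a hedged sketch under those standard assumptions, not the paper's exact construction; the fixed step size `gamma`, the target, and all function names are illustrative.

```python
import numpy as np

def adaptive_metropolis(log_target, x0, n_iter, gamma=0.01, eps=1e-6, seed=0):
    """Adaptive Metropolis sketch: the proposal covariance is learned
    on the fly via a stochastic approximation recursion on
    theta_n = (mu_n, Sigma_n)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    d = x.size
    mu = x.copy()            # running mean estimate
    sigma = np.eye(d)        # running covariance estimate
    scale = 2.38 ** 2 / d    # classical optimal-scaling constant
    samples = np.empty((n_iter, d))
    lp_x = log_target(x)
    for n in range(n_iter):
        # Random-walk proposal driven by the current parameter theta_n.
        cov = scale * sigma + eps * np.eye(d)
        y = rng.multivariate_normal(x, cov)
        lp_y = log_target(y)
        if np.log(rng.random()) < lp_y - lp_x:   # Metropolis accept/reject
            x, lp_x = y, lp_y
        # Stochastic approximation update: theta_{n+1} = theta_n + gamma * H(theta_n, X_{n+1}).
        mu = mu + gamma * (x - mu)
        sigma = sigma + gamma * (np.outer(x - mu, x - mu) - sigma)
        samples[n] = x
    return samples

# Illustrative 2-d standard-normal target: log pi(x) = -|x|^2/2 + const.
chain = adaptive_metropolis(lambda x: -0.5 * np.sum(x ** 2),
                            x0=[5.0, -5.0], n_iter=20000)
```

The convergence of (μn, Σn) to a unique limit θ∗ (here, the true mean and covariance of π) is precisely the kind of mean square convergence the paper's Theorem 3.1 addresses; once θn stabilizes, the chain behaves like the ``optimal'' nonadaptive sampler with proposal covariance (2.38²/d)·Σ∗.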