Abstract

High-dimensional limit theorems have proved useful for deriving tuning rules that achieve the optimal scaling of random walk Metropolis algorithms. The assumptions under which such weak convergence results are established are, however, restrictive: the target density is typically assumed to be of a product form. Users may thus doubt the validity of the resulting tuning rules in practical applications. In this paper, we shed light on optimal scaling problems from a different perspective, namely a large-sample one. This allows us to prove weak convergence results under realistic assumptions and to propose novel parameter-dimension-dependent tuning guidelines. The proposed guidelines are consistent with the previous ones when the target density is close to having a product form. When it is not, the results highlight that the correlation structure has to be accounted for to avoid performance deterioration, and they justify the use of a natural (asymptotically exact) approximation to the correlation matrix that can be employed from the very first algorithm run.
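To make the classical dimension-dependent guideline concrete, the following is a minimal sketch (an illustration, not the paper's derivation): it applies the well-known 2.38^2/d scaling from the optimal scaling literature to an estimate of the target covariance, so that the proposal reflects the correlation structure of the target. The estimate sigma_hat used here (e.g. the inverse observed information at the posterior mode) is an assumption made for illustration only.

    import numpy as np

    def scaled_proposal_cov(sigma_hat):
        # Classical rule of thumb: proposal covariance = (2.38^2 / d) times an
        # estimate of the target covariance, so the proposal accounts for the
        # correlation structure rather than using an isotropic random walk.
        d = sigma_hat.shape[0]
        return (2.38 ** 2 / d) * sigma_hat

    # Illustrative use: sigma_hat is a stand-in estimate of the posterior
    # covariance (an assumption, not the approximation derived in the paper).
    sigma_hat = np.array([[1.0, 0.6],
                          [0.6, 2.0]])
    prop_cov = scaled_proposal_cov(sigma_hat)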

Highlights

  • We consider a Bayesian statistical framework in which one wants to sample from an intractable posterior distribution π to perform inference, using random walk Metropolis (RWM) algorithms

  • We have analysed the behaviour of random walk Metropolis (RWM) algorithms when used to sample from Bayesian posterior distributions, under the asymptotic regime n → ∞ (n being the sample size), in contrast with previous asymptotic analyses where d → ∞ (d being the parameter dimension)

  • Analyses similar to those performed in this paper can be conducted to develop practical tuning guidelines for more sophisticated algorithms such as the Metropolis-adjusted Langevin algorithm (Roberts and Tweedie 1996) and Hamiltonian Monte Carlo (Duane et al. 1987), and to establish other interesting connections with the optimal scaling literature (e.g. Roberts and Rosenthal 1998; Beskos et al. 2013)


Summary

Random walk Metropolis algorithms

Consider a Bayesian statistical framework where one wants to sample from an intractable posterior distribution π to perform inference. This posterior distribution, called the target distribution in a sampling context, is considered here to be that of model parameters θ ∈ Θ = R^d, given a data sample of size n. We assume that π has a probability density function (PDF) with respect to the Lebesgue measure; to simplify, we will use π to denote this density function. Random walk Metropolis (RWM) algorithms (Metropolis et al. 1953), which are Markov chain Monte Carlo (MCMC) methods, can be employed to sample from π. At each iteration, a candidate state is proposed by adding random-walk noise to the current state and is accepted with the Metropolis acceptance probability; if the proposal is rejected, the chain remains at the same state.
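A minimal sketch of the mechanism just described is given below; the Gaussian random-walk proposal, the example target, and the tuning constants are illustrative assumptions, not the specific settings analysed in the paper.

    import numpy as np

    def rwm(log_target, theta0, prop_cov, n_iter, seed=0):
        # Minimal random walk Metropolis sampler (sketch).
        # log_target: log of the target density pi, up to an additive constant.
        # prop_cov:   covariance of the Gaussian random-walk proposal.
        rng = np.random.default_rng(seed)
        theta = np.asarray(theta0, dtype=float)
        d = theta.size
        chol = np.linalg.cholesky(prop_cov)
        lp = log_target(theta)
        chain = np.empty((n_iter, d))
        for t in range(n_iter):
            prop = theta + chol @ rng.standard_normal(d)  # random-walk move
            lp_prop = log_target(prop)
            # Accept with probability min(1, pi(prop) / pi(theta)).
            if np.log(rng.uniform()) < lp_prop - lp:
                theta, lp = prop, lp_prop
            # If rejected, the chain remains at the same state.
            chain[t] = theta
        return chain

    # Illustrative target: a correlated bivariate Gaussian posterior,
    # with the proposal covariance scaled as in the sketch above.
    cov = np.array([[1.0, 0.6], [0.6, 2.0]])
    prec = np.linalg.inv(cov)
    samples = rwm(lambda th: -0.5 * th @ prec @ th,
                  theta0=np.zeros(2),
                  prop_cov=(2.38 ** 2 / 2) * cov,
                  n_iter=5000)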

Optimal scaling problems
Contributions
Large-sample asymptotics of RWM
Notation and framework
Tuning guidelines and analysis of the limiting RWM
Tuning guidelines
Analysis of the limiting RWM
Connection to scaling limits
Logistic regression with real data
Discussion
Appendix A: Proofs