Abstract

Markov chain Monte Carlo algorithms are used to simulate from complex statistical distributions by way of a local exploration of these distributions. This local feature avoids heavy requests on understanding the nature of the target, but it also potentially induces a lengthy exploration of this target, with a requirement on the number of simulations that grows with the dimension of the problem and with the complexity of the data behind it. Several techniques are available toward accelerating the convergence of these Monte Carlo algorithms, either at the exploration level (as in tempering, Hamiltonian Monte Carlo and partly deterministic methods) or at the exploitation level (with Rao–Blackwellization and scalable methods).

This article is categorized under:

  • Statistical and Graphical Methods of Data Analysis > Markov Chain Monte Carlo (MCMC)

  • Algorithms and Computational Methods > Algorithms

  • Statistical and Graphical Methods of Data Analysis > Monte Carlo Methods

Highlights

  • Markov chain Monte Carlo (MCMC) algorithms have been used for nearly 60 years, becoming a reference method for analysing complex Bayesian models in the early 1990s (Gelfand and Smith, 1990)

  • MCMC methods have a history that starts at approximately the same time as the Monte Carlo methods, in conjunction with the conception of the first computers

  • While the search for more efficient and faster MCMC algorithms is never-ending, and while this goal must account for the cost of devising such alternatives under a limited resource budget, there exist several generic solutions, such as first exploring a given target in terms of the geometry of its density before constructing the algorithm


Summary

INTRODUCTION

Markov chain Monte Carlo (MCMC) algorithms have been used for nearly 60 years, becoming a reference method for analysing complex Bayesian models in the early 1990s (Gelfand and Smith, 1990). MCMC algorithms are robust or universal, as opposed to the most standard Monte Carlo methods (see, e.g., Rubinstein, 1981; Robert and Casella, 2004) that require direct simulations from the target distribution. This robustness may induce a slow convergence behaviour, in that the exploration of the relevant space—meaning the part of the space supporting the distribution that has a significant probability mass under that distribution—may take a long while, as the simulation usually proceeds by local jumps in the vicinity of the current position. The following sections provide more details about these directions and the solutions proposed in the literature.
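The "local jumps" behaviour described above can be illustrated with a minimal random-walk Metropolis sampler; the Gaussian target, step size, and function names here are illustrative choices, not taken from the article:

```python
import numpy as np

def random_walk_metropolis(log_target, x0, n_iter=10000, step=0.5, seed=0):
    """Random-walk Metropolis: each proposal is a small perturbation of the
    current state, so distant regions of the target may take many iterations
    to reach (the slow-exploration issue discussed in the text)."""
    rng = np.random.default_rng(seed)
    x = x0
    samples = np.empty(n_iter)
    accepted = 0
    for i in range(n_iter):
        proposal = x + step * rng.standard_normal()  # local jump
        # Accept with probability min(1, pi(proposal) / pi(x)),
        # computed on the log scale for numerical stability.
        if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
            x = proposal
            accepted += 1
        samples[i] = x
    return samples, accepted / n_iter

# Illustrative target: standard normal, log-density up to an additive constant.
samples, acc_rate = random_walk_metropolis(lambda x: -0.5 * x**2, x0=0.0)
```

Shrinking `step` raises the acceptance rate but slows exploration, while a large `step` yields frequent rejections; this trade-off is one motivation for the acceleration techniques surveyed in the article.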

WHAT IS MCMC AND WHY DOES IT NEED ACCELERATING?
ACCELERATING MCMC BY EXPLOITING THE GEOMETRY OF THE TARGET
Hamiltonian Monte Carlo
ACCELERATING MCMC BY BREAKING THE PROBLEM INTO PIECES
Parallelisation and distributed schemes
ACCELERATING MCMC BY IMPROVING THE PROPOSAL
Adaptive MCMC
Multiple proposals and parameterisations
ACCELERATING MCMC BY REDUCING THE VARIANCE
Rao–Blackwellisation and other averaging techniques
CONCLUSION
