Abstract

We describe parallel Markov chain Monte Carlo methods that propagate a collective ensemble of paths, with local covariance information calculated from neighbouring replicas. The use of collective dynamics eliminates multiplicative noise and stabilizes the dynamics, thus providing a practical approach to difficult anisotropic sampling problems in high dimensions. Numerical experiments with model problems demonstrate that dramatic potential speedups, compared to various alternative schemes, are attainable.

Highlights

  • Popular methods for Bayesian parameterization in data analytics are derived as Markov chain Monte Carlo (MCMC) schemes, including Hamiltonian Monte Carlo (HMC) (Duane et al. 1987; Neal 2011; Monnahan et al. 2016) and the Metropolis-adjusted Langevin algorithm (MALA) (Rossky et al. 1978; Bou-Rabee and Vanden-Eijnden 2010; Roberts and Tweedie 1996).

  • Our approach differs in that proposal moves are derived from time discretization of an SDE whose solutions exactly preserve π. This results in ensemble MCMC schemes that converge rapidly on poorly conditioned distributions, even in relatively high-dimensional sample spaces and even when the details of the conditioning problem depend on position in sample space.

  • We describe an efficient MCMC approach in which information from an ensemble of walkers provides an estimate of a modified local covariance matrix; a minimal sketch of one way such an estimate could be formed appears immediately below.
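As a rough illustration of the last highlight, the sketch below computes a sample covariance from a walker's nearest neighbours in the ensemble and regularizes it so it stays positive definite. This is an assumption-laden illustration, not the paper's exact construction; the name local_covariance and the parameters n_neighbours and jitter are placeholders of our own.

```python
import numpy as np

def local_covariance(walkers, i, n_neighbours=10, jitter=1e-6):
    """Illustrative local covariance for walker i from its nearest neighbours.

    walkers : (N, d) array of current walker positions.
    Returns a (d, d) positive-definite matrix (neighbour sample covariance + jitter).
    """
    x = walkers[i]
    # Squared distances from walker i to every walker in the ensemble.
    d2 = np.sum((walkers - x) ** 2, axis=1)
    d2[i] = np.inf                          # exclude the walker itself
    idx = np.argsort(d2)[:n_neighbours]     # indices of the nearest replicas
    nbrs = walkers[idx]
    cov = np.cov(nbrs, rowvar=False)        # sample covariance of the neighbours
    return cov + jitter * np.eye(walkers.shape[1])
```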

Summary

Introduction

Popular methods for Bayesian parameterization in data analytics are derived as Markov chain Monte Carlo (MCMC) schemes, including Hamiltonian (or hybrid) Monte Carlo (HMC) (Duane et al. 1987; Neal 2011; Monnahan et al. 2016) and the Metropolis-adjusted Langevin algorithm (MALA) (Rossky et al. 1978; Bou-Rabee and Vanden-Eijnden 2010; Roberts and Tweedie 1996). Our approach differs in that proposal moves are derived from time discretization of an SDE whose solutions exactly preserve π (or, more precisely, the joint density of an ensemble of independent random variables drawn from π). This results in ensemble MCMC schemes that converge rapidly on poorly conditioned distributions, even in relatively high-dimensional sample spaces and even when the details of the conditioning problem depend on position in sample space.
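To make the general construction concrete, the following is a minimal sketch of a Metropolis-adjusted update whose proposal comes from an Euler–Maruyama discretization of a preconditioned overdamped Langevin SDE, dX = M ∇log π(X) dt + √2 B dW with M = B Bᵀ held fixed during the step (with M constant, the continuous dynamics preserves π). This illustrates the general idea rather than the paper's specific scheme; log_pi, grad_log_pi, the step size h, and the factor B are placeholders.

```python
import numpy as np

def mala_step(x, log_pi, grad_log_pi, B, h, rng):
    """One Metropolis-adjusted Langevin step with a fixed preconditioner B.

    Proposal: Euler-Maruyama step of dX = M grad(log pi)(X) dt + sqrt(2) B dW,
    where M = B @ B.T, followed by a Metropolis-Hastings accept/reject.
    """
    d = x.size
    M = B @ B.T

    def log_q(y_to, x_from):
        # Gaussian proposal density N(x_from + h*M*grad, 2*h*M), up to a constant
        # that cancels in the acceptance ratio.
        mean = x_from + h * M @ grad_log_pi(x_from)
        r = y_to - mean
        return -0.25 / h * r @ np.linalg.solve(M, r)

    # Propose a move.
    xi = rng.standard_normal(d)
    y = x + h * M @ grad_log_pi(x) + np.sqrt(2.0 * h) * B @ xi

    # Metropolis-Hastings correction for the non-symmetric proposal.
    log_alpha = (log_pi(y) + log_q(x, y)) - (log_pi(x) + log_q(y, x))
    if np.log(rng.uniform()) < log_alpha:
        return y, True
    return x, False
```

With B equal to the identity this reduces to standard MALA. In an ensemble setting, one natural choice in the spirit of the highlights above is to take B as a Cholesky factor of a covariance estimated from the other walkers, held fixed while the given walker is updated.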

Preconditioning strategies for sampling
Gaussian mixture model
Log Gaussian Cox model
Conclusion