Abstract

Markov chain Monte Carlo (MCMC) is widely used for Bayesian inference in models of complex systems. Performance, however, is often unsatisfactory in models with many latent variables due to so-called poor mixing, necessitating the development of application-specific implementations. This paper introduces ‘posterior-based proposals’ (PBPs), a new type of MCMC update applicable to a huge class of statistical models (whose conditional dependence structures are represented by directed acyclic graphs). PBPs generate large joint updates in parameter and latent variable space, while retaining good acceptance rates (typically 33%). Evaluation against other approaches (from standard Gibbs/random walk updates to state-of-the-art Hamiltonian and particle MCMC methods) was carried out for widely varying model types: an individual-based model for disease diagnostic test data, a financial stochastic volatility model, a mixed model used in statistical genetics and a population model used in ecology. While different methods worked better or worse in different scenarios, PBPs were found to be either near to the fastest or significantly faster than the next best approach (by up to a factor of 10). PBPs, therefore, represent an additional general purpose technique that can be usefully applied in a wide variety of contexts.

Highlights

  • Markov chain Monte Carlo (MCMC) techniques allow correlated samples to be drawn from essentially any probability distribution by iteratively generating successive values of a carefully constructed Markov chain

  • Non-centred parametrization (NCP) Hamiltonian MCMC (HMCMC) led to a marked improvement, but it still remained considerably slower than the other methods

  • The reason is that for this particular model NCP and model-based proposals (MBPs) work in much the same way: under NCP, proposals in σa lead to a simultaneous expansion or contraction of the random effects, and this is what happens in MBPs when the value of κ in table 1 is set to zero
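The rescaling behaviour described in the last bullet can be sketched in a few lines (a minimal illustration of NCP, not the paper's implementation; the variable names `sigma_a` and `eta` are ours): under NCP the random effects are stored in standardised form, so a proposal that changes σa expands or contracts every reconstructed effect simultaneously.

```python
import random

# Non-centred parametrization (NCP): random effects are stored as
# standardised values eta_i ~ N(0,1), and the actual effects are
# reconstructed as a_i = sigma_a * eta_i.  An MCMC proposal that
# changes sigma_a therefore rescales every a_i at once.
rng = random.Random(0)
eta = [rng.gauss(0.0, 1.0) for _ in range(5)]  # standardised effects

def effects(sigma_a, eta):
    """Reconstruct the random effects from their standardised form."""
    return [sigma_a * e for e in eta]

a_old = effects(1.0, eta)
a_new = effects(2.0, eta)  # proposal doubling sigma_a

# Every effect is scaled by the same factor:
print([round(n / o, 6) for o, n in zip(a_old, a_new)])  # → [2.0, 2.0, 2.0, 2.0, 2.0]
```

This joint expansion/contraction is exactly the move an MBP makes when its tuning parameter is set to zero, which is why the two methods behave so similarly on this model.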

Introduction

Markov chain Monte Carlo (MCMC) techniques allow correlated samples to be drawn from essentially any probability distribution by iteratively generating successive values of a carefully constructed Markov chain. In models with many latent variables, however, the chain often mixes poorly. This manifests itself as a high degree of correlation between consecutive samples along the Markov chain, requiring a very large number of iterations to adequately explore the posterior [1]. This limitation is of practical importance, because it restricts the possible models to which MCMC can realistically be applied. The focus of this paper is to introduce and explore a new approach that helps alleviate these mixing problems, reducing the computational time necessary to generate accurate inference.
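The connection between step size and mixing can be illustrated with a random-walk Metropolis sampler (a generic textbook sketch, not one of the paper's algorithms; the target and tuning values are our own choices). A chain taking tiny steps accepts almost every proposal but barely moves, so consecutive samples are highly correlated:

```python
import math
import random

def metropolis(log_target, x0, step, n_iter, seed=1):
    """Random-walk Metropolis: propose x' = x + step * N(0,1) and
    accept with probability min(1, target(x') / target(x))."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_iter):
        xp = x + step * rng.gauss(0.0, 1.0)
        if math.log(rng.random()) < log_target(xp) - log_target(x):
            x = xp  # accept the proposal
        samples.append(x)  # on rejection the chain repeats its value
    return samples

def lag1_autocorr(xs):
    """Lag-1 autocorrelation: near 1 indicates poor mixing."""
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / n
    cov = sum((xs[i] - m) * (xs[i + 1] - m) for i in range(n - 1)) / n
    return cov / var

log_std_normal = lambda x: -0.5 * x * x  # standard normal target (up to a constant)
well_tuned = metropolis(log_std_normal, 0.0, 2.4, 20000)
tiny_steps = metropolis(log_std_normal, 0.0, 0.05, 20000)
print(lag1_autocorr(well_tuned))  # moderate correlation between consecutive samples
print(lag1_autocorr(tiny_steps))  # close to 1: the chain mixes poorly
```

In higher-dimensional latent-variable models this trade-off sharpens: simple proposals must shrink to keep acceptance rates reasonable, which is the mixing problem that updates such as PBPs are designed to alleviate.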
