Abstract

Markov chain Monte Carlo (MCMC) has transformed Bayesian model inference over the past three decades: largely because of this, Bayesian inference is now a workhorse of applied science. Under general conditions, MCMC sampling converges asymptotically to the posterior distribution, but this provides no guarantees about its performance in finite time. The predominant method for monitoring convergence is to run multiple chains, monitor each chain's characteristics, and compare these to the population as a whole: if within-chain and between-chain summaries are comparable, this is taken to indicate that the chains have converged to a common stationary distribution. Here, we introduce a new method for diagnosing convergence based on how well a machine learning classifier can discriminate between the individual chains. We call this convergence measure R*. In contrast to the predominant R̂, R* is a single statistic across all parameters that indicates lack of mixing, although individual variables' importance for this metric can also be determined. Additionally, R* is not based on any single characteristic of the sampling distribution; instead it uses all the information in the chains, including that given by the joint sampling distribution, which is largely overlooked by existing approaches. We recommend calculating R* using two different machine learning classifiers, gradient-boosted regression trees and random forests, which each work well for models of different dimensions. Because each of these methods outputs a classification probability, we obtain, as a byproduct, uncertainty in R*. The method is straightforward to implement and can serve as a complementary check on MCMC convergence in applied analyses.
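The core idea can be sketched briefly: pool the draws from all chains, train a classifier to predict which chain each draw came from, and rescale the held-out classification accuracy so that a value near 1 indicates the chains are indistinguishable. This is a minimal illustration using scikit-learn's gradient-boosted trees, not the authors' reference implementation; the function name and settings are illustrative.

```python
# Minimal sketch of the R* idea: train a classifier to predict chain
# membership; R* is the held-out accuracy times the number of chains,
# so R* ≈ 1 when chains are well mixed (accuracy ≈ chance, 1/N).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

def r_star(draws, rng_seed=0):
    """draws: array of shape (n_chains, n_iterations, n_parameters)."""
    n_chains, n_iter, _ = draws.shape
    X = draws.reshape(n_chains * n_iter, -1)
    y = np.repeat(np.arange(n_chains), n_iter)  # true chain labels
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, random_state=rng_seed, stratify=y)
    clf = GradientBoostingClassifier(random_state=rng_seed).fit(X_tr, y_tr)
    accuracy = clf.score(X_te, y_te)
    return accuracy * n_chains  # near 1 if the chains mix well

# Well-mixed chains: every chain samples the same distribution,
# so the classifier should do no better than chance.
rng = np.random.default_rng(1)
good = rng.normal(size=(4, 250, 3))
print(r_star(good))  # typically close to 1
```

If one chain is stuck in a different region (for example, its draws are shifted), the classifier identifies it easily and R* rises well above 1, signalling lack of mixing.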

Highlights

  • Markov chain Monte Carlo (MCMC) is the class of exact-approximate methods that has contributed most to applied Bayesian inference in recent years

  • MCMC has made Bayesian inference widely available to a diverse community of practitioners through the many software packages that use it as an internal inference engine: from Gibbs sampling (Geman and Geman, 1984), which underpins the popular BUGS (Lunn et al, 2000) and JAGS (Plummer et al, 2003) libraries, to more recent algorithms such as Hamiltonian Monte Carlo (HMC) (Neal et al, 2011), the No-U-Turn Sampler (NUTS) (Hoffman and Gelman, 2014), and a dynamic HMC variant (Betancourt, 2017), used by Stan (Carpenter et al, 2017), PyMC3 (Salvatier et al, 2016), Turing (Ge et al, 2018), TensorFlow Probability (Dillon et al, 2017) and Pyro

  • Given test data X_test, number of chains N, number of iterations I, and a fitted machine learning (ML) model, ML(x | X_train) → (p_1, p_2, ..., p_N):
      for i = 1 to I do
        for s = 1 to S_test do
          Obtain test draw, x^(s) = X_test(s) ∈ R^K
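The algorithm fragment in the last highlight describes how the classifier's predicted chain-membership probabilities (p_1, ..., p_N) yield a distribution over R* rather than a point estimate: for each test draw, a chain label is sampled from the predictive probabilities, and the resulting accuracy is rescaled. A hedged sketch, with illustrative names and an artificial chance-level classifier standing in for a fitted model:

```python
# Sketch of obtaining uncertainty in R*: instead of taking the argmax
# of the predicted probabilities, repeatedly sample chain labels from
# (p_1, ..., p_N) per test draw and record the implied R* each time.
import numpy as np

def r_star_distribution(probs, y_test, n_chains, n_sims=100, seed=0):
    """probs: (S_test, N) predicted chain probabilities; y_test: true labels."""
    rng = np.random.default_rng(seed)
    s_test = probs.shape[0]
    r_stars = np.empty(n_sims)
    for i in range(n_sims):
        # Sample one chain label per test draw from its predictive
        # distribution, then score the sampled labels against the truth.
        sampled = np.array([rng.choice(n_chains, p=p) for p in probs])
        accuracy = np.mean(sampled == y_test)
        r_stars[i] = accuracy * n_chains
    return r_stars

# A chance-level classifier (all probabilities 1/N): the R* draws
# should concentrate around 1, reflecting well-mixed chains.
probs = np.full((200, 4), 0.25)
y = np.tile(np.arange(4), 50)
sims = r_star_distribution(probs, y, n_chains=4)
```

The returned draws can then be summarised, e.g. with `np.percentile(sims, [2.5, 97.5])`, to report an interval for R* alongside the point estimate.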


Introduction

Markov chain Monte Carlo (MCMC) is the class of exact-approximate methods that has contributed most to applied Bayesian inference in recent years. MCMC has made Bayesian inference widely available to a diverse community of practitioners through the many software packages that use it as an internal inference engine: from Gibbs sampling (Geman and Geman, 1984), which underpins the popular BUGS (Lunn et al, 2000) and JAGS (Plummer et al, 2003) libraries, to more recent algorithms such as Hamiltonian Monte Carlo (HMC) (Neal et al, 2011), the No-U-Turn Sampler (NUTS) (Hoffman and Gelman, 2014), and a dynamic HMC variant (Betancourt, 2017), which are used by Stan (Carpenter et al, 2017), PyMC3 (Salvatier et al, 2016), Turing (Ge et al, 2018), TensorFlow Probability (Dillon et al, 2017) and Pyro. Software packages (Stan (Carpenter et al, 2017), for example) go to great lengths to communicate to users any issues with sampling.

