Abstract

Selection among alternative theoretical models given an observed dataset is an important challenge in many areas of physics and astronomy. Reversible-jump Markov chain Monte Carlo (RJMCMC) is an extremely powerful technique for performing Bayesian model selection, but it suffers from a fundamental difficulty: it requires jumps between model parameter spaces, but cannot efficiently explore both parameter spaces at once. Thus, a naive jump between parameter spaces is unlikely to be accepted in the Markov chain Monte Carlo (MCMC) algorithm, and convergence is correspondingly slow. Here, we demonstrate an interpolation technique that uses samples from single-model MCMCs to propose intermodel jumps from an approximation to the single-model posterior of the target parameter space. The interpolation technique, based on a kD-tree data structure, is adaptive and efficient in modest numbers of dimensions. We show that our technique leads to improved convergence over naive jumps in an RJMCMC, and compare it to other proposals in the literature for improving the convergence of RJMCMCs. We also demonstrate the use of the same interpolation technique to construct efficient 'global' proposal distributions for single-model MCMCs without prior knowledge of the structure of the posterior distribution, and discuss improvements that permit the method to be used efficiently in higher dimensional spaces.
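To make the core idea concrete, here is a minimal Python sketch of an interpolated jump proposal built from stored single-model posterior samples. It is a simplification, not the authors' implementation: it uses scipy's cKDTree only to find neighbourhoods, proposes uniformly within the axis-aligned box spanned by a random sample's k nearest neighbours, and evaluates the proposal density as the corresponding mixture of uniform box densities. All names (InterpolatedProposal, k, etc.) are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

class InterpolatedProposal:
    """Approximate a single-model posterior from stored MCMC samples.

    Simplified sketch of the kD-tree idea: pick a stored sample at
    random, form the axis-aligned box spanned by its k nearest
    neighbours, and draw the proposed point uniformly inside that box.
    """

    def __init__(self, samples, k=16):
        self.samples = np.atleast_2d(samples)      # shape (N, ndim)
        self.k = min(k, len(self.samples))
        self.tree = cKDTree(self.samples)

    def _box(self, i):
        # Axis-aligned bounding box of the k nearest neighbours
        # of stored sample i (the box adapts to local sample density).
        _, idx = self.tree.query(self.samples[i], k=self.k)
        pts = self.samples[idx]
        return pts.min(axis=0), pts.max(axis=0)

    def draw(self, rng):
        # Proposal: random stored sample -> uniform draw in its box.
        i = rng.integers(len(self.samples))
        lo, hi = self._box(i)
        return rng.uniform(lo, hi)

    def logpdf(self, x):
        # Mixture over stored samples: each contributes a uniform
        # density 1/(N * V_i) on its box when x falls inside it.
        # O(N) per evaluation -- fine for a sketch, not for production.
        N = len(self.samples)
        dens = 0.0
        for i in range(N):
            lo, hi = self._box(i)
            if np.all(x >= lo) and np.all(x <= hi):
                dens += 1.0 / (N * np.prod(hi - lo))
        return np.log(dens) if dens > 0.0 else -np.inf
```

Both draw and logpdf are needed because a reversible-jump move must include the proposal density in the Metropolis-Hastings acceptance ratio; the actual kD-tree construction in the paper makes both operations far cheaper than this brute-force mixture.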

Highlights

  • Selection among alternative theoretical models given an observed dataset is an important challenge in many areas of physics and astronomy

  • In Markov chain Monte Carlo (MCMC) techniques, the primary target is an accurate estimate of the posterior distribution. (We note that an alternative stochastic method for exploring a model parameter space, nested sampling [1,2,3], focuses on evidence computation rather than sampling the posterior probability density functions.) It is not straightforward to compute the model evidence from MCMC samples

  • The most direct way to estimate the evidence for a model from MCMC samples is to compute the harmonic-mean estimator (sketched below), but this estimator can suffer from infinite variance [4,5,6,7]
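For reference, the harmonic-mean estimator is Ẑ = [ (1/N) Σⱼ 1/L(θⱼ) ]⁻¹, where the θⱼ are posterior samples. A small Python sketch, computed in log space for numerical stability (the function name is illustrative):

```python
import numpy as np
from scipy.special import logsumexp

def log_evidence_harmonic_mean(log_likelihoods):
    """Harmonic-mean evidence estimate from the log-likelihoods of
    posterior samples: Z ~ [ (1/N) sum_j 1/L_j ]^{-1}.

    Shown only to make the definition concrete: as noted above,
    this estimator can have infinite variance in practice.
    """
    logL = np.asarray(log_likelihoods)
    n = len(logL)
    # log Z = log n - log( sum_j exp(-logL_j) )
    return -(logsumexp(-logL) - np.log(n))
```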


Summary

Bayesian analysis

Bayes' theorem relates the posterior to the likelihood and prior:

p(θi | d, Mi) = L(d | θi, Mi) p(θi | Mi) / p(d | Mi),   (2.1)

where p(θi|d, Mi) is the posterior distribution for the model parameters θi implied by the data in the context of model Mi, L(d|θi, Mi) is the likelihood of the data given the parameters, p(θi|Mi) is the prior probability of the model parameters, which represents our beliefs before accumulating any of the data d, and p(d|Mi), called the evidence, is an overall normalizing constant that ensures p(θi|d, Mi) is properly normalized as a probability distribution on the θi. This implies that the evidence is equal to

p(d|Mi) = ∫_{Vi} dθi L(d|θi, Mi) p(θi|Mi),   (2.2)

where Vi is the parameter-space volume of model Mi. Although nested sampling can be used to compute the posterior PDFs within each model along with the evidences for the various models, the most common technique for computing posterior PDFs in the context of a model is Markov chain Monte Carlo, which we describe below.
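Equation (2.2) can in principle be estimated directly by Monte Carlo integration over the prior. The sketch below illustrates the definition; log_likelihood and prior_sampler are hypothetical user-supplied callables, and this brute-force estimator is only practical when the likelihood is not too concentrated relative to the prior, which is exactly why techniques such as nested sampling exist.

```python
import numpy as np
from scipy.special import logsumexp

def log_evidence_prior_mc(log_likelihood, prior_sampler, n=100_000, rng=None):
    """Direct Monte Carlo estimate of equation (2.2):
    p(d|Mi) = int_{Vi} dtheta L(d|theta, Mi) p(theta|Mi)
            ~ (1/n) sum_j L(d|theta_j),  theta_j ~ prior.

    `log_likelihood(theta)` and `prior_sampler(rng)` are hypothetical
    callables supplied by the user; they are not part of the paper.
    """
    rng = rng or np.random.default_rng()
    logL = np.array([log_likelihood(prior_sampler(rng)) for _ in range(n)])
    # Average in log space: log( (1/n) * sum_j exp(logL_j) ).
    return logsumexp(logL) - np.log(n)
```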

Markov chain Monte Carlo
Reversible-jump Markov chain Monte Carlo
Reversible-jump Markov chain Monte Carlo efficiency
Conclusion

