Abstract

We aim to improve the exploration of the general-purpose random walk Metropolis algorithm when the target has non-convex support A ⊂ ℝ^d, by reusing proposals in A^c which would otherwise be rejected. The algorithm is Metropolis-class and, under standard conditions, the chain satisfies a strong law of large numbers and central limit theorem. Theoretical and numerical evidence of improved performance relative to random walk Metropolis is provided. Issues of implementation are discussed and numerical examples, including applications to global optimisation and rare event sampling, are presented.

Highlights

  • A key challenge for Markov chain Monte Carlo (MCMC) algorithms is the balance between global “exploration” and local “exploitation”

  • In this paper we present the skipping sampler, a general-purpose, easily implemented Metropolis-class algorithm which is capable of improving exploration of targets π with nontrivial support A by reusing proposals lying outside A

  • The resulting Markov chain satisfies a strong law of large numbers and central limit theorem under essentially the same conditions as for random walk Metropolis (RWM), to which we provide theoretical and numerical performance comparisons

Introduction

A key challenge for Markov chain Monte Carlo (MCMC) algorithms is the balance between global “exploration” and local “exploitation”. In this paper we present the skipping sampler, a general-purpose, easily implemented Metropolis-class algorithm which is capable of improving exploration of targets π with nontrivial support A by reusing proposals lying outside A. For this to be useful, we make the following standing assumption.

Assumption 1. π is a probability density function on ℝ^d whose support A ⊂ ℝ^d is non-convex.

To accelerate global exploration of the state space in MCMC algorithms, several approaches have been developed, including tempering, Hamiltonian Monte Carlo and piecewise deterministic methods (see Robert et al. (2018) for a recent review). These methods are best suited to target densities with connected support, since the chain cannot cross regions where the target has zero density: a disconnected support would imply reducibility of the chain and its failure to converge to the target.
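To make the skipping mechanism concrete, here is a minimal Python sketch of a single skipping random-walk Metropolis step. It assumes an isotropic Gaussian proposal decomposed into a direction and a radius, fresh i.i.d. radii for the skips, a finite cap K on the number of skips, and user-supplied functions log_target and in_support; these names and choices are illustrative assumptions rather than the paper's exact construction.

```python
import numpy as np

def skipping_rwm_step(x, log_target, in_support, sigma=1.0, K=20, rng=None):
    """One skipping random-walk Metropolis step (illustrative sketch).

    Assumed ingredients (not taken verbatim from the paper): an isotropic
    Gaussian increment split into direction and radius, i.i.d. extra radii
    for the skips, and at most K attempts before giving up.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[0]

    # Initial random-walk increment, split into direction and radius.
    z = sigma * rng.standard_normal(d)
    phi = z / np.linalg.norm(z)
    y = x + z

    # Skipping phase: instead of rejecting a proposal outside the support A,
    # keep moving along the same direction phi with fresh random radii.
    attempts = 1
    while not in_support(y) and attempts < K:
        extra_radius = np.linalg.norm(sigma * rng.standard_normal(d))
        y = y + extra_radius * phi
        attempts += 1

    # If the proposal never re-entered A, the chain stays put.
    if not in_support(y):
        return x

    # Metropolis accept/reject step; under the symmetry assumed here the
    # acceptance ratio reduces to pi(y) / pi(x).
    if np.log(rng.uniform()) < log_target(y) - log_target(x):
        return y
    return x
```

As a hypothetical usage, in_support could indicate the complement of a unit ball around the origin and log_target could evaluate a standard Gaussian log-density, giving a target whose support is non-convex; the skipping step can then pass straight through the excluded region rather than having to walk around it.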

Related work
Skipping sampler
Theoretical results
Implementation and extensions
Choice of q
Computational aspects
Anisotropy
Choice of K
The doubling trick
Hybrid slice sampler
Numerical examples
General considerations
Rare event sampling
Applications to optimisation
Monotonic skipping sampler
Augmented multistart method
Skipping sampler as basin-hopping subroutine
Proof of Theorem 1
Proof of Theorem 2