Abstract

Sequential Monte Carlo (SMC) samplers are an alternative to MCMC for Bayesian computation. However, their performance depends strongly on the Markov kernels used to rejuvenate particles. We discuss how to calibrate Hamiltonian Monte Carlo kernels within SMC automatically, using the current set of particles. To do so, we build on the adaptive SMC approach of Fearnhead and Taylor (2013), and we also suggest alternative methods. We illustrate the advantages of using HMC kernels within an SMC sampler through an extensive numerical study.
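To make concrete which tuning parameters are at stake, the following is a minimal, generic sketch of an HMC proposal via leapfrog integration; it is not the authors' implementation, and the function and argument names (`grad_log_pi`, `step_size`, `n_steps`, `mass_diag`) are illustrative.

```python
import numpy as np

def leapfrog(grad_log_pi, x, p, step_size, n_steps, mass_diag):
    """One HMC proposal: leapfrog integration of Hamiltonian dynamics.
    step_size, n_steps and the mass matrix (here a diagonal, mass_diag)
    are exactly the tuning parameters that adaptive schemes try to
    calibrate; in this sketch they are plain arguments."""
    x, p = x.copy(), p.copy()
    p = p + 0.5 * step_size * grad_log_pi(x)   # initial half kick
    for i in range(n_steps):
        x = x + step_size * p / mass_diag      # full drift
        if i < n_steps - 1:
            p = p + step_size * grad_log_pi(x)  # full kick
    p = p + 0.5 * step_size * grad_log_pi(x)   # final half kick
    return x, -p  # momentum flip makes the map an involution
```

A well-tuned step size keeps the Hamiltonian (potential plus kinetic energy) nearly conserved along the trajectory, which is what yields high acceptance rates.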

Highlights

  • Sequential Monte Carlo (SMC) samplers (Neal, 2001; Chopin, 2002; Del Moral et al, 2006) approximate a target distribution π by sampling particles from an initial distribution π0 and moving them through a sequence of distributions πt which ends at πT = π.

  • The main contribution of this paper, described in Section 3, is to investigate this tuning in the case of Hamiltonian Monte Carlo (HMC) kernels. Regarding the choice of the tempering exponent, a common approach (Jasra et al, 2011; Schafer and Chopin, 2013) is to select the intermediate distributions adaptively within SMC by monitoring the effective sample size (ESS).

  • In order to assess the performance of the two tuning procedures (FT and PR), we compare the tuning parameters obtained at the final stage of our SMC samplers (HMCAFT and HMCAPR) with those obtained from the following Markov chain Monte Carlo (MCMC) procedures: NUTS (Hoffman and Gelman, 2014) and the adaptive MCMC algorithm of Mohamed et al (2013)
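The ESS-based adaptive choice of the next tempering exponent mentioned in the highlights can be sketched as follows. This is a generic illustration rather than the authors' exact procedure; the target ESS fraction `alpha` and the bisection tolerance are assumed tuning choices.

```python
import numpy as np

def ess(log_w):
    """Effective sample size of particles with the given log-weights."""
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    return 1.0 / np.sum(w ** 2)

def next_exponent(log_ratio, lam, n_particles, alpha=0.5, tol=1e-6):
    """Find the next tempering exponent lam' > lam by bisection so that
    the incremental weights (lam' - lam) * log_ratio give ESS ~ alpha * N.
    log_ratio[i] = log pi(x_i) - log pi0(x_i) for particle x_i."""
    lo, hi = lam, 1.0
    if ess((hi - lam) * log_ratio) >= alpha * n_particles:
        return 1.0  # we can jump straight to the final target
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if ess((mid - lam) * log_ratio) >= alpha * n_particles:
            lo = mid
        else:
            hi = mid
    return lo
```

Because the ESS of the incremental weights decreases as the exponent increment grows, bisection reliably locates the largest step that keeps the particle system healthy.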


Summary

Introduction

In Bayesian computation, SMC sampling has several advantages over Markov chain Monte Carlo (MCMC): it enables the estimation of normalizing constants and can be used for model choice (Zhou et al, 2016). The propagation of the particles commonly relies on MCMC kernels, which depend on some tuning parameters. Choosing these parameters in a sensible manner is challenging and is of interest both from a theoretical and practical point of view; see Fearnhead and Taylor (2013); Schafer and Chopin (2013); Beskos et al (2016).
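To illustrate the normalizing-constant estimation mentioned above, here is a stripped-down tempered SMC sketch: it accumulates the evidence estimate from the stage-wise average incremental weights and resamples between stages. A full SMC sampler would additionally rejuvenate particles with MCMC moves (such as the HMC kernels studied in this paper); the function name and toy target are illustrative assumptions.

```python
import numpy as np

def smc_evidence(log_target, log_init, sample_init, exponents, n=2000, seed=0):
    """Estimate the normalizing constant Z of exp(log_target) by
    tempering from the (normalized) initial density exp(log_init).
    No MCMC rejuvenation is applied, so this is only a skeleton."""
    rng = np.random.default_rng(seed)
    x = sample_init(rng, n)
    log_z, lam_prev = 0.0, 0.0
    for lam in exponents:
        delta = (lam - lam_prev) * (log_target(x) - log_init(x))
        # log of the average incremental weight, computed stably
        m = delta.max()
        log_z += m + np.log(np.mean(np.exp(delta - m)))
        # multinomial resampling according to the incremental weights
        w = np.exp(delta - m)
        w /= w.sum()
        x = x[rng.choice(n, size=n, p=w)]
        lam_prev = lam
    return np.exp(log_z)
```

On a toy problem with an unnormalized N(1, 1) target and an N(0, 1) initial distribution, the estimate should land close to the true constant sqrt(2*pi).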

Background
Sequential Monte Carlo samplers
Result
Tuning of Hamiltonian Monte Carlo within Sequential Monte Carlo
Tuning of the mass matrix of the kernels
Discussion of the tuning procedures
Experiments
Tempering from an isotropic Gaussian to a shifted correlated Gaussian
Tempering from a Gaussian to a mixture of two correlated Gaussians
Binary regression posterior
Log Gaussian Cox process model
Discussion