Abstract
This paper shows how the theory of Dirichlet forms can be used to deliver proofs of optimal scaling results for Markov chain Monte Carlo algorithms (specifically, Metropolis–Hastings random walk samplers) under regularity conditions which are substantially weaker than those required by the original approach (based on the use of infinitesimal generators). The Dirichlet form methods have the added advantage of providing an explicit construction of the underlying infinite-dimensional context. In particular, this enables us directly to establish weak convergence to the relevant infinite-dimensional distributions.
Highlights
This paper focuses on Metropolis–Hastings random walk samplers based on a simple target, namely the joint distribution of a large independent sample taken from a fixed distribution satisfying modest regularity conditions.
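In the notation standard in this literature (a sketch of the setup rather than notation quoted from the paper; f denotes the fixed marginal density), the n-dimensional target is the product measure

```latex
\pi^{n}(\mathrm{d}x) \;=\; \prod_{i=1}^{n} f(x_i)\,\mathrm{d}x_i .
```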
The paper studies the Dirichlet form corresponding to the continuous-time Markov process obtained from the Metropolis–Hastings random walk (MHRW), reformulated as a discrete-time Markov chain jumping at the instants of an exponential clock of rate n. A natural candidate for the limiting Dirichlet form is then identified.
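As a hedged sketch of what these two objects look like, following the classical optimal scaling analysis of Roberts, Gelman and Gilks rather than the paper's exact notation, the rate-n form and its candidate limit would read:

```latex
% Dirichlet form of the rate-n jump chain driven by one MHRW step:
\mathcal{E}_n(u) \;=\; \frac{n}{2}\,\mathbb{E}_{\pi^n}\!\left[\bigl(u(X_1)-u(X_0)\bigr)^{2}\right],
\qquad X_0 \sim \pi^{n},\ X_1 \text{ one MHRW step from } X_0 ;

% candidate limit: an infinite product of Langevin diffusions,
% each run at the classical speed h(\ell):
\mathcal{E}(u) \;=\; \frac{h(\ell)}{2}\sum_{i=1}^{\infty}\int
\Bigl(\frac{\partial u}{\partial x_i}\Bigr)^{2}\,\mathrm{d}\pi^{\infty},
\qquad
h(\ell) \;=\; 2\ell^{2}\,\Phi\!\Bigl(-\frac{\ell\sqrt{I}}{2}\Bigr),
\quad
I \;=\; \mathbb{E}_{f}\!\left[\bigl((\log f)'\bigr)^{2}\right],
```

where Φ denotes the standard normal distribution function and π^∞ the infinite product of the marginal f.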
The above work demonstrates that Dirichlet forms provide an effective methodology for treating the optimal scaling framework in its natural infinite-dimensional context, and for reducing the framework's dependence on restrictive regularity conditions.
Summary
Markov chain Monte Carlo (MCMC) algorithms form a general and widespread computational methodology addressing the problem of drawing samples from complex and intractable probability distributions [21,7]. Because of their simplicity and their scalability to high-dimensional settings, MCMC algorithms are routinely used in many fields to obtain approximations of integrals that could not be tackled by common numerical methods. The Metropolis–Hastings algorithm generates a Markov chain as follows: given a current state x, the chain samples a proposed value y from some symmetric transition kernel Q(x, ·) and moves to the proposal y with probability a(x, y), otherwise remaining at x; for a symmetric proposal the acceptance probability is a(x, y) = min{1, π(y)/π(x)}. A related algorithm, using gradient-informed proposals, is the Metropolis-Adjusted Langevin Algorithm (MALA: [23]).
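To make the algorithm concrete, here is a minimal Python sketch (illustrative only, not code from the paper) of the MHRW sampler for a product target, with the per-coordinate proposal variance ℓ²/n studied in the optimal scaling literature; the function names and the Gaussian example are assumptions of this sketch:

```python
import numpy as np

def mhrw(log_f, x0, n_steps, ell):
    """Random walk Metropolis for a product target pi_n(x) = prod_i f(x_i).

    Proposal: y = x + (ell / sqrt(n)) * Z with Z ~ N(0, I_n), the scaling
    studied in the optimal scaling literature. `log_f` is the log-density
    of the one-dimensional marginal f, applied elementwise.
    """
    x = np.asarray(x0, dtype=float)
    n = x.size
    chain = np.empty((n_steps, n))
    log_pi = log_f(x).sum()                    # log pi_n(x) = sum_i log f(x_i)
    for t in range(n_steps):
        y = x + (ell / np.sqrt(n)) * np.random.randn(n)  # symmetric proposal
        log_pi_y = log_f(y).sum()
        # Metropolis acceptance: a(x, y) = min(1, pi_n(y) / pi_n(x))
        if np.log(np.random.rand()) < log_pi_y - log_pi:
            x, log_pi = y, log_pi_y
        chain[t] = x
    return chain

# Example: standard normal marginal f. For Gaussian-like targets the
# asymptotically optimal choice is ell ~ 2.38 / sqrt(I), corresponding
# to the well-known average acceptance rate of roughly 0.234.
chain = mhrw(lambda x: -0.5 * x**2, np.zeros(50), 10_000, ell=2.38)
```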