Abstract

The Metropolis algorithm and its variants are perhaps the most widely used methods of generating Markov chains with a specified equilibrium distribution. We study an extension of the Metropolis algorithm from both a decision-theoretic and a rate-of-convergence point of view. The decision-theoretic approach was first taken by Peskun (1973), who showed some optimality properties of the classical Metropolis sampler. In this article, we propose an extension of the Metropolis algorithm which reduces the asymptotic variance and accelerates the convergence rate of its classic form. The principal method used to improve the properties of a sampler is to move mass from the diagonal elements of the Markov chain's transition matrix to the off-diagonal elements. A low-dimensional example is given to illustrate that our extended algorithm converges to the stationary distribution in the fastest possible order, n steps, while the conventional Metropolis chain takes at least order n^2 log(n) steps.
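The diagonal-to-off-diagonal mass shift described above can be sketched on a small discrete state space. The following is a minimal illustration, not the paper's actual construction: it builds a Metropolis transition matrix for a hypothetical 4-state target and then scales all off-diagonal entries by the largest constant that keeps the diagonal non-negative. Uniform scaling of the off-diagonal preserves detailed balance, so the stationary distribution is unchanged while the chain's "holding" probability shrinks (a Peskun-style improvement).

```python
import numpy as np

# Hypothetical 4-state target distribution (illustrative only).
pi = np.array([0.1, 0.2, 0.3, 0.4])
n = len(pi)

# Metropolis chain with a uniform proposal over all states:
# accept a move i -> j with probability min(1, pi[j] / pi[i]).
Q = np.full((n, n), 1.0 / n)                      # symmetric proposal
A = np.minimum(1.0, pi[None, :] / pi[:, None])    # acceptance probabilities
P = Q * A
np.fill_diagonal(P, 0.0)
np.fill_diagonal(P, 1.0 - P.sum(axis=1))          # rejection mass on diagonal

# Move mass off the diagonal: scale every off-diagonal entry by the
# largest constant c that keeps all diagonal entries non-negative.
# Detailed balance pi[i] * P[i, j] = pi[j] * P[j, i] survives a uniform
# scaling, so pi remains the stationary distribution.
off = P - np.diag(np.diag(P))
c = 1.0 / off.sum(axis=1).max()
P_improved = c * off
np.fill_diagonal(P_improved, 1.0 - P_improved.sum(axis=1))
```

After this transformation at least one diagonal entry of `P_improved` is exactly zero, and every off-diagonal entry weakly dominates the corresponding entry of `P`, which is the sense in which Peskun's ordering declares the modified chain at least as good.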
