Abstract
Let π denote the posterior distribution that results when a random sample of size n from a d-dimensional location-scale Student's t distribution (with ν degrees of freedom) is combined with the standard noninformative prior. van Dyk and Meng developed an efficient Markov chain Monte Carlo (MCMC) algorithm for sampling from π and provided considerable empirical evidence to suggest that their algorithm converges to stationarity much faster than the standard data augmentation algorithm. In addition to its practical importance, this algorithm is interesting from a theoretical standpoint because it is based upon a Markov chain that is not positive recurrent. In this article, we formally analyze the relevant sub-Markov chain underlying van Dyk and Meng's algorithm. In particular, we establish drift and minorization conditions that show that, for many (d, ν, n) triples, the sub-Markov chain is geometrically ergodic. This is the first general, rigorous analysis of an MCMC algorithm based upon a nonpositive recurrent Markov chain. Moreover, our results are important from a practical standpoint because (1) geometric ergodicity guarantees the existence of central limit theorems that can be used to construct Monte Carlo standard errors and (2) the drift and minorization conditions themselves allow for the calculation of exact upper bounds on the total variation distance to stationarity. The results are illustrated using a simple numerical example.
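To make the setting concrete, the following is a minimal sketch of the *standard* data augmentation (Gibbs) algorithm against which van Dyk and Meng's algorithm is compared, specialized to the univariate case (d = 1) with the noninformative prior π(μ, σ²) ∝ 1/σ². The latent weights, the function name, and the starting values are illustrative assumptions; this is not van Dyk and Meng's improved sampler, nor the exact parameterization used in the article.

```python
import numpy as np

def da_gibbs_t(y, nu, n_iter=2000, seed=0):
    """Standard DA (Gibbs) sampler for a univariate location-scale
    Student's t model with prior pi(mu, sigma^2) proportional to 1/sigma^2.
    Illustrative sketch only; d = 1 and names are assumptions."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)
    n = len(y)
    mu, sigma2 = np.median(y), np.var(y)          # crude starting values
    draws = np.empty((n_iter, 2))
    for t in range(n_iter):
        # I-step: latent weights q_i | mu, sigma2, y ~ Gamma((nu+1)/2, rate)
        shape = (nu + 1.0) / 2.0
        rate = (nu + (y - mu) ** 2 / sigma2) / 2.0
        q = rng.gamma(shape, 1.0 / rate)          # numpy uses a scale parameter
        # P-step: sigma2 | q, mu, y ~ Inverse-Gamma(n/2, sum(q*(y-mu)^2)/2),
        # then mu | sigma2, q, y ~ Normal(weighted mean, sigma2 / sum(q))
        sigma2 = 1.0 / rng.gamma(n / 2.0, 2.0 / np.sum(q * (y - mu) ** 2))
        mu = rng.normal(np.sum(q * y) / np.sum(q), np.sqrt(sigma2 / np.sum(q)))
        draws[t] = mu, sigma2
    return draws
```

On synthetic t-distributed data the chain's post-burn-in draws concentrate around the generating location and scale; it is this chain's (often slow) convergence to stationarity that motivates the improved algorithm analyzed in the article.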