Abstract

The Bayesian probit regression model (Albert and Chib (1993)) is popular and widely used for binary regression. While the improper flat prior for the regression coefficients is an appropriate choice in the absence of any prior information, a proper normal prior is desirable when prior information is available or in modern high dimensional settings where the number of coefficients ($p$) is greater than the sample size ($n$). For both choices of priors, the resulting posterior density is intractable and a Data Augmentation (DA) Markov chain is used to generate approximate samples from the posterior distribution. Establishing geometric ergodicity for this DA Markov chain is important as it provides theoretical guarantees for constructing standard errors for Markov chain-based estimates of posterior quantities. In this paper, we first show that in the case of proper normal priors, the DA Markov chain is geometrically ergodic *for all* choices of the design matrix $X$, $n$ and $p$ (unlike the improper prior case, where $n \geq p$ and another condition on $X$ are required for posterior propriety itself). We also derive sufficient conditions under which the DA Markov chain is trace-class, i.e., the eigenvalues of the corresponding operator are summable. In particular, this allows us to conclude that the Haar PX-DA sandwich algorithm (obtained by inserting an inexpensive extra step in between the two steps of the DA algorithm) is strictly better than the DA algorithm in an appropriate sense.
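As a concrete illustration of the two-step DA algorithm referred to above, the following is a minimal sketch of the Albert and Chib (1993) sampler under a proper normal prior $N(\mu_0, \Sigma_0)$ on $\beta$. The function name, default iteration count, and use of NumPy/SciPy are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of the Albert-Chib DA (Gibbs) sampler for Bayesian probit
# regression with a proper N(mu0, Sigma0) prior on beta.  Details such as the
# function name and defaults are illustrative assumptions, not from the paper.
import numpy as np
from scipy.stats import truncnorm

def da_probit(y, X, mu0, Sigma0, n_iter=5000, seed=None):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    Sigma0_inv = np.linalg.inv(Sigma0)
    V = np.linalg.inv(X.T @ X + Sigma0_inv)   # conditional covariance of beta given z
    L = np.linalg.cholesky(V)
    beta = np.zeros(p)                        # arbitrary starting value
    draws = np.empty((n_iter, p))
    for t in range(n_iter):
        # Step 1: z_i | beta, y_i ~ N(x_i' beta, 1), truncated to (0, inf) if
        # y_i = 1 and to (-inf, 0) if y_i = 0.
        m = X @ beta
        lower = np.where(y == 1, -m, -np.inf)   # standardised truncation bounds
        upper = np.where(y == 1, np.inf, -m)
        z = m + truncnorm.rvs(lower, upper, size=n, random_state=rng)
        # Step 2: beta | z ~ N( V (X'z + Sigma0^{-1} mu0), V ).
        mean = V @ (X.T @ z + Sigma0_inv @ mu0)
        beta = mean + L @ rng.standard_normal(p)
        draws[t] = beta
    return draws
```

Each iteration first imputes the latent Gaussian variables given $\beta$ and then draws $\beta$ from its multivariate normal full conditional; note that neither step requires $n \geq p$, consistent with the fact that a proper normal prior yields a proper posterior for any $X$, $n$ and $p$.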

Highlights

  • Let $Y_1, \ldots, Y_n$ be independent Bernoulli random variables with $P(Y_i = 1 \mid \beta) = \Phi(x_i^T \beta)$, where $x_i \in \mathbb{R}^p$ is the vector of known covariates corresponding to the $i$th observation $Y_i$ for $i = 1, \ldots, n$, $\beta \in \mathbb{R}^p$ is a vector of unknown regression coefficients, and $\Phi(\cdot)$ denotes the standard normal distribution function (a small simulation sketch follows this list)

  • Roy and Hobert [22] prove the geometric ergodicity of the resultant algorithm when an improper flat prior on β is considered, and derive the PX-Data Augmentation (DA) sandwich algorithm in this setting

  • A method for sampling from a density that appears in the Haar PX-DA algorithm is described in Appendix B
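To make the model in the first highlight concrete, here is a tiny simulation sketch that generates binary responses from $P(Y_i = 1 \mid \beta) = \Phi(x_i^T \beta)$; the sample size, covariates and coefficient values below are arbitrary illustrative choices, not from the paper.

```python
# Simulate data from the probit model P(Y_i = 1 | beta) = Phi(x_i' beta);
# all numerical choices here are illustrative assumptions.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n, p = 100, 3
X = rng.standard_normal((n, p))                 # known covariates x_1, ..., x_n
beta_true = np.array([0.5, -1.0, 0.25])         # coefficients used only to simulate data
y = rng.binomial(1, norm.cdf(X @ beta_true))    # Y_i ~ Bernoulli(Phi(x_i' beta))
```

The resulting pair (y, X), together with prior hyperparameters, could then be passed to a sampler such as the da_probit sketch given after the abstract.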


Summary

Introduction

It can be shown that the transition density for Ψ is strictly positive everywhere, which implies that Ψ is Harris ergodic (see Asmussen and Glynn [2]). It follows that cumulative averages based on the above Markov chain can be used to consistently estimate corresponding posterior expectations. Roy and Hobert [22] prove the geometric ergodicity of the resultant algorithm when an improper flat prior (instead of a proper normal prior) on β is considered, and derive the PX-DA sandwich algorithm in this setting. Unlike our paper, these authors construct minorization conditions that allow them to use regeneration techniques for the consistent estimation of asymptotic variances. A method for sampling from a density that appears in the Haar PX-DA algorithm is described in Appendix B.
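The point about cumulative averages and asymptotic variances can be illustrated as follows: a posterior expectation E[h(β) | y] is estimated by the ergodic average of h along the DA chain, and geometric ergodicity is what justifies attaching a Monte Carlo standard error to that average. The batch-means estimator sketched below is one commonly used choice, shown purely as an illustration; it is not the regeneration-based approach of Roy and Hobert [22], nor necessarily the method used in this paper.

```python
# Ergodic average of a 1-d functional of the chain, with a non-overlapping
# batch-means Monte Carlo standard error (one standard choice, shown only as
# an illustration of how such standard errors are constructed).
import numpy as np

def ergodic_average_with_bm_se(h_values, n_batches=30):
    h = np.asarray(h_values, dtype=float)
    m = len(h) // n_batches                 # batch length
    h = h[: m * n_batches]                  # drop the remainder so batches are equal
    est = h.mean()                          # cumulative (ergodic) average
    batch_means = h.reshape(n_batches, m).mean(axis=1)
    var_hat = m * np.sum((batch_means - est) ** 2) / (n_batches - 1)
    se = np.sqrt(var_hat / len(h))          # Monte Carlo standard error
    return est, se

# e.g., for the first regression coefficient from the DA output sketched earlier:
# est, se = ergodic_average_with_bm_se(draws[:, 0])
```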

Geometric ergodicity for the AC-DA chain
Trace-class property for the AC-DA chain
Sandwich algorithms
Illustration