Abstract

Consider the standard linear model y = Xβ + σε, where the components of ε are iid standard normal errors. Park and Casella [14] consider a Bayesian treatment of this model with a Laplace/Inverse-Gamma prior on (β, σ). They introduce a Data Augmentation approach that can be used to explore the resulting intractable posterior density, and call it the Bayesian lasso algorithm. In this paper, the Markov chain underlying the Bayesian lasso algorithm is shown to be geometrically ergodic, for arbitrary values of the sample size n and the number of variables p. This is important, as geometric ergodicity provides theoretical justification for the use of the Markov chain CLT, which can then be used to obtain asymptotic standard errors for Markov chain based estimates of posterior quantities. Kyung et al. [12] provide a proof of geometric ergodicity for the restricted case n ≥ p, but as we explain in this paper, their proof is incorrect. Our approach is different and more direct, and enables us to establish geometric ergodicity for arbitrary n and p.
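The data augmentation algorithm of Park and Casella cycles through the full conditional distributions of the regression parameters and the latent scale variables τ². A minimal sketch is below, assuming the standard full conditionals from the Bayesian lasso literature (β multivariate normal, σ² inverse-gamma under the improper prior 1/σ², and 1/τⱼ² inverse-Gaussian); the function name and defaults are illustrative, not from the paper:

```python
import numpy as np

def bayesian_lasso_gibbs(y, X, lam=1.0, n_iter=5000, seed=0):
    """Sketch of the Park-Casella data augmentation (Gibbs) sampler.

    Alternates between (beta, sigma^2) given the augmented variables
    tau^2 = (tau_1^2, ..., tau_p^2) and tau^2 given (beta, sigma^2).
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    XtX, Xty = X.T @ X, X.T @ y
    sigma2 = 1.0
    inv_tau2 = np.ones(p)  # stores 1/tau_j^2
    draws = np.empty((n_iter, p))
    for t in range(n_iter):
        # beta | sigma^2, tau^2, y ~ N(A^{-1} X'y, sigma^2 A^{-1}),
        # with A = X'X + D_tau^{-1}
        A_inv = np.linalg.inv(XtX + np.diag(inv_tau2))
        beta = rng.multivariate_normal(A_inv @ Xty, sigma2 * A_inv)
        # sigma^2 | beta, tau^2, y ~ Inverse-Gamma (assumes prior 1/sigma^2)
        resid = y - X @ beta
        shape = (n - 1) / 2 + p / 2
        scale = (resid @ resid + beta @ (inv_tau2 * beta)) / 2
        sigma2 = scale / rng.gamma(shape)
        # 1/tau_j^2 | beta, sigma^2 ~ Inverse-Gaussian with
        # mean sqrt(lam^2 sigma^2 / beta_j^2) and shape lam^2
        mu = np.sqrt(lam**2 * sigma2 / beta**2)
        inv_tau2 = rng.wald(mu, lam**2)
        draws[t] = beta
    return draws
```

The geometric ergodicity result concerns exactly this chain: it guarantees that averages of such draws converge at a geometric rate regardless of how n and p compare.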

Highlights

  • Note that the estimate in (1.1) can be regarded as the posterior mode of β (conditional on σ2) if one puts independent Laplace priors on the entries of β

  • Consider the standard linear model y = Xβ + σε, where y = (y1, …, yn) ∈ Rn is the vector of observations, X is the design matrix, β ∈ Rp is the vector of regression coefficients, the components of ε are iid standard normal errors, and σ2 is the variance parameter

  • We have established geometric ergodicity of the Markov chain corresponding to the Bayesian lasso algorithm


Summary

Introduction

Note that the estimate in (1.1) can be regarded as the posterior mode of β (conditional on σ2) if one puts independent Laplace priors on the entries of β. Based on this observation, several authors proposed a Bayesian analysis using a Laplace-like prior for β (see for example [3, 7, 20]). The Bayesian lasso Gibbs Markov chain is geometrically ergodic for n ≥ 3 and arbitrary p, X, λ. The Markov chain CLT therefore holds and can be used to obtain asymptotic standard errors of posterior estimates. In the restricted case when n ≥ p, Kyung et al. [12] contains, among other results, a proof of geometric ergodicity of the Bayesian lasso Gibbs Markov chain. A detailed explanation of the problems with that proof is provided in the appendix.
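To illustrate how the Markov chain CLT translates into asymptotic standard errors in practice, here is a minimal batch-means estimator. This is a generic, well-known construction rather than anything specific to the paper; the function name and the choice of 30 batches are illustrative assumptions:

```python
import numpy as np

def batch_means_se(chain, n_batches=30):
    """Asymptotic standard error of an MCMC sample mean via batch means.

    Valid under a Markov chain CLT, which geometric ergodicity (together
    with a moment condition) guarantees for the Bayesian lasso chain.
    """
    chain = np.asarray(chain, dtype=float)
    m = len(chain) // n_batches          # batch length
    trimmed = chain[: m * n_batches]     # drop the remainder
    batch_means = trimmed.reshape(n_batches, m).mean(axis=1)
    # The sample variance of the batch means estimates sigma_CLT^2 / m,
    # so m * var(batch_means) / N estimates the variance of the overall mean.
    var_hat = m * batch_means.var(ddof=1)
    return np.sqrt(var_hat / len(trimmed))
```

For an iid chain this reduces (approximately) to the usual s/√N standard error; for a correlated but geometrically ergodic chain it accounts for the autocorrelation.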

The Bayesian lasso Markov chain
Drift condition
Minorization condition
Discussion