Abstract

Use of continuous shrinkage priors — with a “spike” near zero and heavy tails towards infinity — is an increasingly popular approach to inducing sparsity in parameter estimates. When the parameters are only weakly identified by the likelihood, however, the posterior may end up with tails as heavy as the prior's, jeopardizing the robustness of inference. A natural solution is to “shrink the shoulders” of a shrinkage prior by lightening its tails beyond a reasonable parameter range, yielding a regularized version of the prior. We develop a regularization approach which, unlike previous proposals, preserves the computationally attractive structure of the original shrinkage priors. We study theoretical properties of the Gibbs sampler on the resulting posterior distributions, with emphasis on convergence rates of the Pólya-Gamma Gibbs sampler for sparse logistic regression. Our analysis shows that the proposed regularization leads to geometric ergodicity under a broad range of global-local shrinkage priors. Essentially, the only requirement is that the prior $\pi_{\text{local}}(\cdot)$ on the local scale $\lambda$ satisfy $\pi_{\text{local}}(0) < \infty$. If $\pi_{\text{local}}(\cdot)$ further satisfies $\lim_{\lambda \to 0} \pi_{\text{local}}(\lambda) / \lambda^a < \infty$ for some $a > 0$, as in the case of Bayesian bridge priors, we show the sampler to be uniformly ergodic.
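To make the setting concrete, below is a minimal sketch (not the paper's implementation) of one Pólya-Gamma Gibbs update of the coefficients in sparse logistic regression under a regularized global-local prior. The Pólya-Gamma draw uses a truncated version of the infinite-sum representation of Polson, Scott and Windle (2013), and the "shrunken shoulders" regularization is assumed here to cap the conditional prior variance $\tau^2 \lambda_j^2$ with an extra slab-scale term `c_slab`, in the spirit of the regularized horseshoe of Piironen and Vehtari (2017); the paper's exact construction may differ. All variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_pg(b, c, n_terms=200):
    """Approximate draw from PG(b, c) by truncating the infinite-sum
    representation of Polson, Scott & Windle (2013):
        omega = (1 / (2 pi^2)) * sum_k g_k / ((k - 1/2)^2 + c^2 / (4 pi^2)),
    with g_k ~ Gamma(b, 1). Truncation introduces a small bias; this is
    for illustration only."""
    k = np.arange(1, n_terms + 1)
    g = rng.gamma(shape=b, scale=1.0, size=n_terms)
    return np.sum(g / ((k - 0.5) ** 2 + (c / (2 * np.pi)) ** 2)) / (2 * np.pi ** 2)

def gibbs_update_beta(beta, X, y, lam, tau, c_slab):
    """One Polya-Gamma Gibbs update of beta for logistic regression.
    Conditional prior (assumed regularization): beta_j ~ N(0, v_j) with
        1 / v_j = 1 / (tau * lam_j)^2 + 1 / c_slab^2,
    so the prior scale is capped near c_slab ('shrunken shoulders')."""
    # Sample the Polya-Gamma latent variable for each observation.
    omega = np.array([sample_pg(1.0, x_i @ beta) for x_i in X])
    # Regularized conditional prior precision of each coefficient.
    prior_prec = 1.0 / (tau * lam) ** 2 + 1.0 / c_slab ** 2
    # Conditionally Gaussian update: V = (X' Omega X + D)^{-1}, m = V X' (y - 1/2).
    V = np.linalg.inv(X.T @ (omega[:, None] * X) + np.diag(prior_prec))
    m = V @ (X.T @ (y - 0.5))
    return rng.multivariate_normal(m, V)
```

A full sampler would alternate this step with updates of the local scales $\lambda_j$ and the global scale $\tau$ from their conditionals, which depend on the chosen $\pi_{\text{local}}(\cdot)$; those updates are omitted here.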
