Abstract

In the past decade, many Bayesian shrinkage models have been developed for linear regression problems in which the number of covariates, p, is large. Computation for the resulting intractable posteriors is often carried out with three-block Gibbs samplers (3BG), based on representing the shrinkage priors as scale mixtures of Normal distributions. An alternative computing tool is a state-of-the-art Hamiltonian Monte Carlo (HMC) method, which can be easily implemented in the Stan software. However, we found both existing methods to be inefficient and often impractical for large-p problems. Following the general idea of Rajaratnam et al., we propose two-block Gibbs samplers (2BG) for three commonly used shrinkage models, namely, the Bayesian group lasso, the Bayesian sparse group lasso, and the Bayesian fused lasso models. We demonstrate with simulated and real data examples that the Markov chains underlying the 2BGs converge much faster than those of the 3BGs, and no more slowly than those of HMC. At the same time, the per-iteration computing costs of the 2BGs are as low as those of the 3BGs and can be several orders of magnitude lower than those of HMC. As a result, the newly proposed 2BG is the only practical computing solution for Bayesian shrinkage analysis of datasets with large p. Further, we provide theoretical justification for the superior performance of the 2BGs. We establish geometric ergodicity of the Markov chains associated with the 2BG for each of the three Bayesian shrinkage models. We also prove that, for most cases of the Bayesian group lasso and the Bayesian sparse group lasso models, the Markov operators for the 2BG chains are trace-class. In contrast, for all cases of all three Bayesian shrinkage models, the Markov operators for the 3BG chains are not even Hilbert–Schmidt. Supplementary materials for this article are available online.
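To make the blocking idea concrete, the sketch below illustrates a two-block Gibbs sampler on the ordinary Bayesian lasso, the simplest member of this family of scale-mixture-of-Normals models; it is not the paper's group, sparse group, or fused lasso samplers, which use the same (sigma^2, beta)-versus-tau blocking but different conditional distributions. The sketch assumes a centered response y, the improper prior pi(sigma^2) proportional to 1/sigma^2, and Exp(lambda^2/2) priors on the latent scales tau_j; the function name and all parameter choices are illustrative only.

```python
# A minimal sketch of the two-block Gibbs (2BG) structure for the ordinary
# Bayesian lasso. A 3BG would update beta, sigma^2, and tau in three separate
# steps; the 2BG draws (sigma^2, beta) as a single block by integrating beta
# out of the sigma^2 update, then draws tau given (sigma^2, beta).
import numpy as np

def bayesian_lasso_2bg(X, y, lam=1.0, n_iter=5000, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    XtX, Xty = X.T @ X, X.T @ y
    tau = np.ones(p)                     # latent scales from the Normal mixture
    draws = np.empty((n_iter, p))
    for t in range(n_iter):
        # --- Block 1: draw (sigma^2, beta) jointly given tau ---
        A = XtX + np.diag(1.0 / tau)
        m = np.linalg.solve(A, Xty)      # posterior mean of beta given tau
        # sigma^2 | tau, y ~ InvGamma((n-1)/2, (y'y - y'X A^{-1} X'y)/2):
        # beta has been integrated out before sigma^2 is drawn
        b = 0.5 * (y @ y - Xty @ m)
        sigma2 = b / rng.gamma((n - 1) / 2.0)   # scale/Gamma draw = InvGamma draw
        # beta | sigma^2, tau, y ~ N(m, sigma^2 A^{-1}), via a Cholesky factor of A
        L = np.linalg.cholesky(A)
        beta = m + np.sqrt(sigma2) * np.linalg.solve(L.T, rng.standard_normal(p))
        # --- Block 2: draw tau given (sigma^2, beta) ---
        # 1/tau_j | beta, sigma^2 ~ InvGaussian(sqrt(lam^2 sigma^2 / beta_j^2), lam^2)
        inv_tau = rng.wald(np.sqrt(lam**2 * sigma2 / beta**2), lam**2)
        tau = 1.0 / inv_tau
        draws[t] = beta
    return draws
```

Under these assumptions, a call such as `bayesian_lasso_2bg(X, y, lam=0.5)` returns a matrix of beta draws whose autocorrelations can be compared against a three-block run; the speedup reported in the paper comes precisely from drawing sigma^2 with beta integrated out rather than conditioning on it.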
