Abstract

Variable selection based on penalized regression methods has recently received a great deal of attention, mostly within frequentist frameworks. This paper investigates regularized regression from a Bayesian perspective. Our new method extends Bayesian Lasso regression (Park and Casella, 2008) by replacing the least-squares loss and the Lasso penalty with a composite quantile loss function and an adaptive Lasso penalty, which allows different penalization parameters for different regression coefficients. Within a Bayesian hierarchical model framework, an efficient Gibbs sampler is derived to simulate the parameters from their posterior distributions. Furthermore, we study Bayesian composite quantile regression with an adaptive group Lasso penalty. A distinguishing characteristic of the newly proposed method is that it is completely data adaptive, requiring no prior knowledge of the error distribution. Extensive simulations and two real data examples are used to examine the performance of the proposed method. All results confirm that the new method is both robust and highly efficient, and it often outperforms other approaches.
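For concreteness, the composite quantile loss with an adaptive Lasso penalty described above can be sketched as follows. This is a minimal illustration of the objective being regularized, not the authors' Bayesian Gibbs-sampling implementation; the function names, the per-quantile intercepts, and the coefficient-specific weight vector are our own notational assumptions.

```python
import numpy as np

def check_loss(u, tau):
    """Quantile check loss: rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (u < 0).astype(float))

def cqr_adaptive_lasso_objective(beta, intercepts, X, y, taus, lam, weights):
    """Composite quantile loss plus an adaptive Lasso penalty.

    Sums the check loss over K quantile levels (each level tau_k with
    its own intercept b_k, sharing one coefficient vector beta) and adds
    lam * sum_j weights[j] * |beta[j]|, so each coefficient receives its
    own effective penalization parameter, as in the adaptive Lasso.
    """
    Xb = X @ beta
    loss = sum(check_loss(y - b_k - Xb, tau).sum()
               for b_k, tau in zip(intercepts, taus))
    penalty = lam * np.sum(weights * np.abs(beta))
    return loss + penalty
```

In the adaptive Lasso, the weights are typically taken as `1 / |beta_init| ** gamma` for some initial estimate, so large coefficients are penalized less; in the paper's Bayesian treatment these penalization parameters are instead handled through the hierarchical prior rather than by direct minimization of this objective.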

