Abstract

Quantile regression (QR) is an appealing alternative for characterizing the conditional quantile functions of a response variable when the assumptions of linear mean regression are not satisfied. One advantage of QR over traditional mean regression is that QR estimates are more robust to outliers and remain valid under a large class of error distributions. Regularization methods have proved effective in the QR literature for performing parameter estimation and variable selection simultaneously. This study considers a bridge-randomized penalty on the regression coefficients by incorporating uncertainty about the penalty into Bayesian bridge QR. An asymmetric Laplace distribution (ALD) is imposed on the model errors and a generalized Gaussian distribution (GGD) prior on the regression coefficients to establish a Bayesian bridge-randomized QR model. In addition, the bridge penalty exponent is treated as an unknown parameter and assigned a Beta-distributed prior. By exploiting the normal-exponential and uniform-Gamma mixture representations of the ALD and the GGD, respectively, a Bayesian hierarchical model is constructed for fully Bayesian posterior inference. Gibbs sampling and Metropolis–Hastings algorithms are used to draw Markov chain Monte Carlo samples from the full conditional posterior distributions of all unknown parameters. Finally, the proposed procedures are illustrated through simulation studies and applied to a real-data analysis.
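
For concreteness, a minimal sketch of the two mixture representations the abstract relies on is given below, using a standard parameterization; the exact scaling of the quantile level \tau, the scale parameters \sigma and \eta, and the support of the Beta prior on the exponent \alpha (which typically lies in (0, 2] and so is rescaled here) are assumptions for illustration, not details taken from the paper.

```latex
% Hedged sketch (standard parameterization assumed, not taken from the paper):
% line 1: normal-exponential mixture of the ALD error at quantile level tau;
% line 2: the ALD mixture constants;
% line 3: uniform-Gamma mixture of the GGD (bridge) prior, so that marginally
%         p(beta_j) is proportional to exp(-|beta_j/eta|^alpha);
% line 4: a Beta-type prior on the bridge exponent alpha, rescaled to (0, 2].
\begin{align*}
  y_i &= x_i^{\top}\beta + \theta z_i + \psi\sqrt{\sigma z_i}\,u_i,
      & u_i \sim N(0,1),\quad z_i \sim \operatorname{Exp}(\text{mean }\sigma),\\
  \theta &= \frac{1-2\tau}{\tau(1-\tau)},
      \qquad \psi^{2} = \frac{2}{\tau(1-\tau)},\\
  \beta_j \mid s_j &\sim \operatorname{Uniform}\bigl(-\eta\, s_j^{1/\alpha},\ \eta\, s_j^{1/\alpha}\bigr),
      & s_j \sim \operatorname{Gamma}\bigl(1+\tfrac{1}{\alpha},\,1\bigr),\\
  \tfrac{\alpha}{2} &\sim \operatorname{Beta}(a_0, b_0).
\end{align*}
```

Under this kind of hierarchy, conditionally on the latent scales z_i and s_j the regression coefficients have a truncated-normal full conditional, which is what makes Gibbs updates feasible, while the exponent \alpha has no standard full conditional and is naturally handled by a Metropolis–Hastings step.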
