Abstract

Bayesian inference provides a flexible way of combining data with prior information. However, quantile regression is not equipped with a parametric likelihood, and therefore Bayesian inference for quantile regression demands careful investigation. This paper considers the Bayesian empirical likelihood approach to quantile regression. Taking the empirical likelihood into a Bayesian framework, we show that the resulting posterior from any fixed prior is asymptotically normal; its mean shrinks toward the true parameter values, and its variance approaches that of the maximum empirical likelihood estimator. A more interesting case can be made for Bayesian empirical likelihood when informative priors are used to explore commonality across quantiles. Regression quantiles that are computed separately at each percentile level tend to be highly variable in data-sparse areas (e.g., at high or low percentile levels). Through empirical likelihood, the proposed method enables us to explore various forms of commonality across quantiles for efficiency gains. By using an MCMC algorithm for the computation, we avoid the daunting task of directly maximizing the empirical likelihood. The finite-sample performance of the proposed method is investigated empirically; substantial efficiency gains are demonstrated with informative priors on common features across several percentile levels. A theoretical framework of shrinking priors is used in the paper to better understand the power of the proposed method.
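
To make the construction concrete, below is a minimal illustrative sketch, not the paper's implementation: the posterior is taken proportional to prior(beta) times the empirical likelihood EL(beta), where the estimating functions are the usual quantile-regression score functions, and the posterior is sampled with random-walk Metropolis so that the empirical likelihood is only ever evaluated at proposed points, never maximized over beta. All function names, tolerances, and tuning constants here are assumptions for illustration.

```python
# Sketch of Bayesian empirical likelihood (BEL) for quantile regression.
# Assumed setup: posterior(beta) ∝ prior(beta) * EL(beta), sampled by MCMC.
import numpy as np

def score(beta, X, y, tau):
    """Quantile-regression estimating functions g_i(beta) = x_i * (tau - 1{y_i <= x_i'beta})."""
    return X * (tau - (y - X @ beta <= 0.0))[:, None]

def log_el(beta, X, y, tau, max_iter=50):
    """Log empirical likelihood ratio via Owen's dual problem:
    log EL(beta) = -max_lambda sum_i log(1 + lambda' g_i(beta)),
    solved by Newton's method with step-halving to keep all weights positive."""
    g = score(beta, X, y, tau)
    lam = np.zeros(X.shape[1])
    for _ in range(max_iter):
        w = 1.0 + g @ lam                       # n positive EL weights (up to 1/n)
        gw = g / w[:, None]
        grad = gw.sum(axis=0)                   # gradient of the dual objective
        if np.max(np.abs(grad)) < 1e-8:         # stationary point of the dual reached
            break
        try:
            delta = np.linalg.solve(gw.T @ gw, grad)   # Newton ascent direction
        except np.linalg.LinAlgError:
            return -np.inf                      # degenerate scores: treat as infeasible
        for _ in range(32):                     # halve the step until weights stay positive
            if np.all(1.0 + g @ (lam + delta) > 1e-10):
                break
            delta *= 0.5
        lam = lam + delta
    return -np.log(1.0 + g @ lam).sum()

def bel_sampler(X, y, tau, log_prior, n_iter=5000, scale=0.05, seed=0):
    """Random-walk Metropolis targeting the BEL posterior prior(beta) * EL(beta)."""
    rng = np.random.default_rng(seed)
    beta = np.linalg.lstsq(X, y, rcond=None)[0]         # crude least-squares start
    logp = log_prior(beta) + log_el(beta, X, y, tau)
    draws = np.empty((n_iter, X.shape[1]))
    for t in range(n_iter):
        prop = beta + scale * rng.standard_normal(beta.shape)
        logp_prop = log_prior(prop) + log_el(prop, X, y, tau)
        if np.log(rng.uniform()) < logp_prop - logp:    # Metropolis accept/reject
            beta, logp = prop, logp_prop
        draws[t] = beta
    return draws

# Example: median regression (tau = 0.5) on simulated data with a vague N(0, 10^2 I) prior.
rng = np.random.default_rng(1)
n = 200
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
y = X @ np.array([1.0, 2.0]) + rng.standard_normal(n)
draws = bel_sampler(X, y, tau=0.5, log_prior=lambda b: -0.5 * b @ b / 100.0)
print(draws[1000:].mean(axis=0))                        # posterior mean after burn-in
```

Each posterior evaluation solves only a low-dimensional convex dual problem for the Lagrange multiplier, while the sampler never maximizes the empirical likelihood over beta, which is the computational point the abstract makes. In this sketch, informative priors linking coefficients across several quantile levels tau would enter through the log_prior argument.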
