Abstract
We study Bayesian procedures for sparse linear regression when the unknown error distribution is endowed with a non-parametric prior. Specifically, we put a symmetrized Dirichlet process mixture of Gaussians prior on the error density, where the mixing distributions are compactly supported. For the prior on the regression coefficients, a mixture of point masses at zero and continuous distributions is considered. Under the assumption that the model is well specified, we study the behavior of the posterior as the number of predictors diverges. The compatibility and restricted eigenvalue conditions yield the minimax convergence rates of the regression coefficients in the $\ell _1$- and $\ell _2$-norms, respectively. In addition, strong model selection consistency and a semi-parametric Bernstein–von Mises theorem are proven under slightly stronger conditions.