Abstract

We study Bayesian procedures for sparse linear regression when the unknown error distribution is endowed with a non-parametric prior. Specifically, we put a symmetrized Dirichlet process mixture of Gaussian prior on the error density, where the mixing distributions are compactly supported. For the prior on regression coefficients, a mixture of point masses at zero and continuous distributions is considered. Under the assumption that the model is well specified, we study the behavior of the posterior with a diverging number of predictors. The compatibility and restricted eigenvalue conditions yield the minimax convergence rate of the regression coefficients in $\ell _1$- and $\ell _2$-norms, respectively. In addition, strong model selection consistency and a semi-parametric Bernstein–von Mises theorem are proven under slightly stronger conditions.
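As a rough illustration of the two priors described above, the sketch below draws from a spike-and-slab prior on the coefficients and from a truncated stick-breaking approximation to a symmetrized Dirichlet process location mixture of Gaussians with compactly supported mixing distribution. All numerical choices (mixing weight, slab distribution, truncation level, support bound, kernel bandwidth) are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_spike_slab(p, w=0.1, slab_scale=1.0):
    # Prior on coefficients: point mass at zero with prob 1 - w,
    # a continuous slab (Laplace here, an illustrative choice) with prob w.
    nonzero = rng.random(p) < w
    return np.where(nonzero, rng.laplace(scale=slab_scale, size=p), 0.0)

def sample_symmetrized_dp_mixture(n_atoms=20, alpha=1.0, loc_bound=3.0, sigma=0.5):
    # Truncated stick-breaking draw from a Dirichlet process whose base
    # measure is supported on the compact interval [-loc_bound, loc_bound].
    v = rng.beta(1.0, alpha, size=n_atoms)
    w = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    mu = rng.uniform(-loc_bound, loc_bound, size=n_atoms)

    def density(x):
        # Symmetrize by averaging each Gaussian component with its
        # reflection about zero, so the error density is symmetric.
        x = np.asarray(x, dtype=float)[..., None]
        norm = sigma * np.sqrt(2.0 * np.pi)
        comp_pos = np.exp(-0.5 * ((x - mu) / sigma) ** 2) / norm
        comp_neg = np.exp(-0.5 * ((x + mu) / sigma) ** 2) / norm
        return (0.5 * (comp_pos + comp_neg) * w).sum(axis=-1)

    return density

beta = sample_spike_slab(p=50)   # sparse: most entries exactly zero
f = sample_symmetrized_dp_mixture()
```

A draw of `beta` is sparse by construction, and `f` satisfies `f(x) == f(-x)`, mirroring the symmetry imposed on the error density.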


