Abstract
Ridge and lasso regression, also known as regularization methods, are widely used in machine learning and inverse problems, where they introduce additional information to solve ill-posed problems and/or perform feature selection. The ridge and lasso estimates of the linear regression parameters can be interpreted as Bayesian posterior estimates when the regression parameters have Normal and independent Laplace (i.e., double-exponential) priors, respectively. A significant challenge with these approaches is that they assume the data are normally distributed, which makes them sensitive to model misspecification. A Bayesian approach for ridge and lasso models based on empirical likelihood is proposed. The method is semiparametric because it combines a nonparametric (empirical) likelihood with a parametric prior, so problems arising from model misspecification are avoided. Under the Bayesian empirical likelihood approach, the resulting posterior distribution lacks a closed form and has nonconvex support, which makes traditional Markov chain Monte Carlo (MCMC) methods such as Gibbs sampling and Metropolis–Hastings very challenging to implement. To address the nonconvex optimization and nonconvergence problems, a tailored Metropolis–Hastings approach is implemented. Asymptotic Bayesian credible intervals are also derived.
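For reference, the prior–penalty correspondence stated in the opening sentences can be written explicitly. Under a Gaussian likelihood y ~ N(Xβ, σ²I), each penalized estimate equals the posterior mode (MAP estimate) under the corresponding prior (a standard identity, with λ the regularization weight):

```latex
\begin{aligned}
\hat{\beta}_{\mathrm{ridge}} &= \arg\min_{\beta}\,\|y - X\beta\|_2^2 + \lambda\|\beta\|_2^2
  &&\Longleftrightarrow\quad \beta_j \overset{\mathrm{iid}}{\sim} \mathcal{N}\!\left(0,\ \sigma^2/\lambda\right),\\
\hat{\beta}_{\mathrm{lasso}} &= \arg\min_{\beta}\,\|y - X\beta\|_2^2 + \lambda\|\beta\|_1
  &&\Longleftrightarrow\quad \beta_j \overset{\mathrm{iid}}{\sim} \mathrm{Laplace}\!\left(0,\ 2\sigma^2/\lambda\right).
\end{aligned}
```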
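The abstract does not spell out the tailored Metropolis–Hastings algorithm, so the following is a minimal sketch under one common reading of "tailored": an independence sampler whose multivariate-t proposal is centred at the posterior mode, with scale taken from a Hessian approximation. The profile empirical log-likelihood, Owen's pseudo-logarithm, the ridge prior scale `tau`, and all tuning constants are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch only: a Bayesian empirical-likelihood (BEL) ridge posterior
# explored with a "tailored" independence Metropolis-Hastings sampler.
import numpy as np
from scipy import optimize, stats

def log_star(z, eps):
    """Owen's pseudo-logarithm: log(z) for z >= eps, quadratic extension below.
    Keeps the empirical-likelihood dual finite and smooth even when a candidate
    beta falls outside the convex-hull support of the scores."""
    z = np.asarray(z, dtype=float)
    return np.where(
        z >= eps,
        np.log(np.maximum(z, eps)),
        np.log(eps) - 1.5 + 2.0 * z / eps - z**2 / (2.0 * eps**2),
    )

def profile_log_el(beta, X, y):
    """Profile empirical log-likelihood ratio at beta for the linear-model
    estimating equations g_i(beta) = x_i * (y_i - x_i @ beta)."""
    n = len(y)
    g = X * (y - X @ beta)[:, None]                      # n x p score matrix
    dual = lambda lam: -np.sum(log_star(1.0 + g @ lam, 1.0 / n))
    res = optimize.minimize(dual, np.zeros(X.shape[1]), method="BFGS")
    return res.fun                                       # = log EL ratio (<= 0)

def log_post(beta, X, y, tau):
    """BEL ridge log-posterior: profile log-EL plus independent N(0, tau^2) priors.
    (For the lasso version, replace the quadratic term with -np.abs(beta).sum() / tau.)"""
    return profile_log_el(beta, X, y) - 0.5 * beta @ beta / tau**2

def tailored_mh(X, y, tau=1.0, df=5, n_iter=2000, seed=0):
    """Independence MH with a multivariate-t proposal tailored to the posterior:
    centred at the mode, scaled by the BFGS inverse-Hessian approximation."""
    rng = np.random.default_rng(seed)
    p = X.shape[1]
    opt = optimize.minimize(lambda b: -log_post(b, X, y, tau),
                            np.zeros(p), method="BFGS")
    proposal = stats.multivariate_t(loc=opt.x, shape=opt.hess_inv, df=df)
    beta = opt.x.copy()
    lp, lq = log_post(beta, X, y, tau), proposal.logpdf(beta)
    draws = np.empty((n_iter, p))
    for t in range(n_iter):
        cand = proposal.rvs(random_state=rng)
        lp_c, lq_c = log_post(cand, X, y, tau), proposal.logpdf(cand)
        # Independence-sampler acceptance ratio.
        if np.log(rng.uniform()) < (lp_c - lp) - (lq_c - lq):
            beta, lp, lq = cand, lp_c, lq_c
        draws[t] = beta
    return draws
```

Given posterior draws, approximate credible intervals can be read off the empirical quantiles of the sample, e.g. `np.quantile(draws, [0.025, 0.975], axis=0)`; the paper's asymptotic intervals are derived analytically rather than by simulation.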