Abstract

Ridge and lasso regression, collectively known as regularization methods, are widely used in machine learning and inverse problems to introduce additional information that resolves ill-posed problems and/or performs feature selection. The ridge and lasso estimates of linear regression parameters can be interpreted as Bayesian posterior estimates when the regression parameters have Normal and independent Laplace (i.e., double-exponential) priors, respectively. A significant limitation of these approaches is that they assume normally distributed data, which makes them not robust to model misspecification. A Bayesian approach to ridge and lasso models based on empirical likelihood is proposed. The method is semiparametric because it combines a nonparametric (empirical) likelihood with a parametric prior; hence, problems caused by model misspecification are avoided. Under the Bayesian empirical likelihood approach, the resulting posterior distribution lacks a closed form and has nonconvex support, which makes traditional Markov chain Monte Carlo (MCMC) methods such as Gibbs sampling and Metropolis–Hastings very challenging to implement. To overcome the resulting nonconvexity and nonconvergence problems, a tailored Metropolis–Hastings algorithm is implemented. Asymptotic Bayesian credible intervals are also derived.
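To make the prior correspondence concrete, here is a minimal numerical check (an illustration with simulated data, not code from the paper; names such as `neg_log_post` are hypothetical): the closed-form ridge estimate with penalty λ = σ²/τ² coincides with the posterior mode under independent N(0, τ²) priors on the coefficients.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, p = 100, 5
X = rng.standard_normal((n, p))
y = X @ np.array([2.0, -1.0, 0.0, 0.5, 0.0]) + rng.standard_normal(n)

sigma, tau = 1.0, 0.5          # noise std dev and prior std dev
lam = sigma**2 / tau**2        # equivalent ridge penalty

# Closed-form ridge estimate: (X'X + lam * I)^{-1} X'y
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Negative log-posterior under y | beta ~ N(X beta, sigma^2 I),
# beta ~ N(0, tau^2 I); its minimizer is the MAP estimate.
def neg_log_post(beta):
    return (np.sum((y - X @ beta) ** 2) / (2 * sigma**2)
            + np.sum(beta**2) / (2 * tau**2))

beta_map = minimize(neg_log_post, np.zeros(p)).x
print(np.allclose(beta_ridge, beta_map, atol=1e-4))  # expect True
```

The lasso case is analogous, with the squared-norm penalty replaced by λ‖β‖₁ arising from the Laplace prior.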
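The empirical likelihood that replaces the parametric Gaussian likelihood is computed by profiling over observation weights. Below is a minimal sketch of that inner computation via Owen's dual formulation (the helper `log_el_ratio` is illustrative, not the paper's code): for estimating-equation values g_i, the optimal weights are w_i = 1/(n(1 + λᵀg_i)), with λ found by a convex dual optimization.

```python
import numpy as np
from scipy.optimize import minimize

def log_el_ratio(g):
    """Log empirical likelihood ratio for an n x q matrix g whose rows
    are estimating-equation values g_i. The optimal weights are
    w_i = 1 / (n * (1 + lam' g_i)), where lam minimises the convex
    dual -sum_i log(1 + lam' g_i); the minimum equals sum_i log(n w_i)."""
    n, q = g.shape

    def dual(lam):
        arg = 1.0 + g @ lam
        if np.any(arg < 1.0 / n):   # keep all weights positive and <= 1
            return np.inf
        return -np.sum(np.log(arg))

    res = minimize(dual, np.zeros(q), method="Nelder-Mead")
    return res.fun                  # log EL ratio, always <= 0

# Usage: EL ratio for the hypothesis that the mean of x equals 0,
# using g_i = x_i - 0 as the estimating equation.
x = np.random.default_rng(1).standard_normal(50)
print(log_el_ratio(x.reshape(-1, 1)))
```

For linear regression, the same routine applies with the score-type estimating equations g_i(β) = x_i(y_i − x_iᵀβ).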
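Finally, the tailored Metropolis–Hastings idea is to fit the proposal to the target: the chain proposes from a heavy-tailed Student-t distribution centred at the posterior mode, with scale taken from the local curvature, so candidates land in the high-density region even when the support is awkward. The sketch below is an illustration under stated assumptions, not the paper's implementation: it runs an independence chain of this kind on a simpler Bayesian lasso target (Gaussian likelihood with a Laplace prior); the paper's target would replace the Gaussian log-likelihood with the log empirical likelihood.

```python
import numpy as np
from scipy.stats import multivariate_t
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n, p = 100, 3
X = rng.standard_normal((n, p))
y = X @ np.array([1.5, 0.0, -1.0]) + rng.standard_normal(n)
sigma, lam = 1.0, 5.0

# Log posterior: Gaussian log-likelihood plus Laplace log-prior.
def log_post(beta):
    return (-np.sum((y - X @ beta) ** 2) / (2 * sigma**2)
            - lam * np.sum(np.abs(beta)))

# Tailor the proposal: heavy-tailed Student-t centred at the posterior
# mode, scaled by the curvature of the Gaussian part of the target.
mode = minimize(lambda b: -log_post(b), np.zeros(p),
                method="Nelder-Mead").x
scale = np.linalg.inv(X.T @ X / sigma**2)
proposal = multivariate_t(loc=mode, shape=scale, df=5)

# Independence Metropolis-Hastings chain with the tailored proposal.
beta, draws = mode.copy(), []
for _ in range(5000):
    cand = proposal.rvs(random_state=rng)
    log_accept = (log_post(cand) - log_post(beta)
                  + proposal.logpdf(beta) - proposal.logpdf(cand))
    if np.log(rng.uniform()) < log_accept:
        beta = cand
    draws.append(beta)
print(np.array(draws).mean(axis=0))  # posterior mean estimate
```

Because the proposal already matches the shape of the posterior, the chain mixes well without the local random-walk moves that struggle on nonconvex supports.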
