Abstract

Classical adaptive lasso regression is known to possess the oracle properties; namely, it performs as well as if the correct submodel were known in advance. However, it requires consistent initial estimates of the regression coefficients, which are generally not available in high-dimensional settings. In addition, none of the algorithms used to obtain the adaptive lasso estimators provides a valid measure of standard error. To overcome these drawbacks, several Bayesian approaches have been proposed for the adaptive lasso and related estimators. In this paper, we consider a fully Bayesian treatment of the adaptive lasso that leads to a new Gibbs sampler with tractable full conditional posteriors. Through simulations and real data analyses, we compare the new Gibbs sampler with existing Bayesian and non-Bayesian methods; the results show that the new approach performs well relative to both.
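
As a rough illustration only (the paper's exact full conditionals are not given in this abstract), Bayesian adaptive lasso samplers are commonly built by writing each coefficient-specific Laplace prior as a scale mixture of normals, which yields standard-form full conditionals for the coefficients, the local scales, the error variance, and the per-coefficient penalties. The sketch below assumes that representation with a conjugate Gamma hyperprior on each squared penalty; all names and hyperparameter choices are illustrative assumptions, not the authors' algorithm.

```python
# Sketch of a Gibbs sampler for a Bayesian adaptive lasso under the usual
# scale-mixture-of-normals representation (an assumption for illustration,
# not necessarily the sampler proposed in the paper).
#
# Model:  y | beta, sigma2      ~ N(X beta, sigma2 * I)
#         beta_j | sigma2, tau2 ~ N(0, sigma2 * tau2_j)
#         tau2_j                ~ Exp(lambda2_j / 2)
#         lambda2_j             ~ Gamma(a, rate=b)
#         p(sigma2) proportional to 1/sigma2
import numpy as np

def adaptive_lasso_gibbs(X, y, n_iter=5000, burn_in=1000, a=1.0, b=0.1, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    XtX, Xty = X.T @ X, X.T @ y

    # initial values
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    sigma2 = np.var(y - X @ beta) + 1e-6
    tau2 = np.ones(p)
    lam2 = np.ones(p)

    draws = []
    for it in range(n_iter):
        # beta | rest ~ N(A^{-1} X'y, sigma2 * A^{-1}),  A = X'X + diag(1/tau2)
        A = XtX + np.diag(1.0 / tau2)
        L = np.linalg.cholesky(A)
        mean = np.linalg.solve(A, Xty)
        beta = mean + np.sqrt(sigma2) * np.linalg.solve(L.T, rng.standard_normal(p))

        # 1/tau2_j | rest ~ Inverse-Gaussian(sqrt(lam2_j*sigma2/beta_j^2), lam2_j)
        mu_ig = np.sqrt(lam2 * sigma2 / np.maximum(beta**2, 1e-12))
        tau2 = 1.0 / rng.wald(mu_ig, lam2)

        # sigma2 | rest ~ Inv-Gamma((n-1+p)/2, (resid'resid + beta' D^{-1} beta)/2)
        resid = y - X @ beta
        rate = 0.5 * (resid @ resid + np.sum(beta**2 / tau2))
        sigma2 = rate / rng.gamma(0.5 * (n - 1 + p))

        # lam2_j | rest ~ Gamma(a + 1, rate = b + tau2_j / 2)  (adaptive penalties)
        lam2 = rng.gamma(a + 1.0, 1.0 / (b + 0.5 * tau2))

        if it >= burn_in:
            draws.append(beta.copy())

    return np.asarray(draws)  # posterior draws of beta after burn-in
```

Because the sampler returns posterior draws of the coefficients, posterior standard deviations (or credible intervals) computed from those draws give the standard-error measure that the non-Bayesian adaptive lasso algorithms lack, which is the motivation highlighted in the abstract.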
