Abstract

The adaptive least absolute shrinkage and selection operator (Lasso) and least absolute deviation (LAD)-Lasso are two attractive shrinkage methods for simultaneous variable selection and regression parameter estimation. While the adaptive Lasso is efficient for small-magnitude errors, the LAD-Lasso is robust against heavy-tailed errors and severe outliers. In this article, we consider a data-driven convex combination of these two modern procedures to produce a robust adaptive Lasso, which not only enjoys the oracle properties but also synthesizes the advantages of the adaptive Lasso and the LAD-Lasso. It fully adapts to different error structures, including the infinite-variance case, and automatically chooses the optimal weight to achieve both robustness and high efficiency. Extensive simulation studies demonstrate the good finite-sample performance of the robust adaptive Lasso. Two data sets are analyzed to illustrate the practical use of the procedure.
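The abstract does not spell out the combined objective, but a convex combination of the two loss functions is the natural reading. The following is a minimal sketch under that assumption: the criterion is taken to be alpha times the sum of squared residuals plus (1 - alpha) times the sum of absolute residuals, together with an adaptively weighted L1 penalty. The names `robust_adaptive_lasso`, `alpha`, `lam`, and `w`, and the use of cvxpy as the solver, are illustrative choices and not the article's own notation or implementation.

```python
# Sketch: convex combination of adaptive-Lasso and LAD-Lasso criteria.
# All symbols (alpha, lam, w) are illustrative, not the article's notation.
import numpy as np
import cvxpy as cp

def robust_adaptive_lasso(X, y, alpha, lam, w):
    """Minimize  alpha * ||y - Xb||_2^2 + (1 - alpha) * ||y - Xb||_1
                 + lam * sum_j w_j * |b_j|.

    alpha in [0, 1] trades least-squares efficiency against LAD robustness;
    w holds adaptive penalty weights, e.g. 1/|b_pilot| from a pilot fit.
    """
    n, p = X.shape
    b = cp.Variable(p)
    resid = y - X @ b
    loss = alpha * cp.sum_squares(resid) + (1 - alpha) * cp.norm1(resid)
    penalty = lam * cp.sum(cp.multiply(w, cp.abs(b)))
    cp.Problem(cp.Minimize(loss + penalty)).solve()
    return b.value

# Toy usage: sparse coefficients with heavy-tailed (t, df=2) errors.
rng = np.random.default_rng(0)
n, p = 100, 8
X = rng.standard_normal((n, p))
beta_true = np.array([2.0, 0.0, -1.5, 0.0, 0.0, 1.0, 0.0, 0.0])
y = X @ beta_true + rng.standard_t(df=2, size=n)

b_pilot = np.linalg.lstsq(X, y, rcond=None)[0]   # pilot estimate for adaptive weights
w = 1.0 / np.maximum(np.abs(b_pilot), 1e-6)
print(robust_adaptive_lasso(X, y, alpha=0.5, lam=1.0, w=w))
```

In this sketch alpha is fixed by hand; in the article's procedure the combination weight is chosen in a data-driven way, which is not reproduced here.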
