Abstract

Variable selection in linear models is essential for improved inference and interpretation, a task that has become even more critical for high-dimensional data. In this article, we provide a selective review of classical methods, including the Akaike information criterion, the Bayesian information criterion, Mallows's Cp, and the risk inflation criterion, as well as regularization methods, including the Lasso, bridge regression, the smoothly clipped absolute deviation, the minimax concave penalty, the adaptive Lasso, the elastic net, and the group Lasso. We discuss how to select the penalty parameters. We also review screening procedures for ultrahigh-dimensional data.

WIREs Comput Stat 2014, 6:1–9. doi: 10.1002/wics.1284
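As a concrete illustration of penalty-parameter selection (a minimal sketch of our own, not a method taken from the article), the following Python snippet fits the Lasso on synthetic high-dimensional data and chooses the penalty by cross-validation via scikit-learn's LassoCV; the data, dimensions, and seed are all illustrative assumptions.

```python
# A minimal sketch: Lasso with the penalty parameter chosen by
# 10-fold cross-validation. Synthetic data; illustrative only.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n, p, s = 100, 200, 5            # high-dimensional: p > n, with s true nonzero coefficients
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:s] = 2.0                   # sparse true coefficient vector
y = X @ beta + rng.standard_normal(n)

# LassoCV searches a grid of penalty values and picks the one
# minimizing cross-validated prediction error.
fit = LassoCV(cv=10).fit(X, y)
print("selected penalty:", fit.alpha_)
print("number of selected variables:", np.sum(fit.coef_ != 0))
```

Cross-validation is only one option; information criteria such as BIC can also be used to choose the penalty, typically yielding sparser models at some cost in prediction accuracy.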
