Abstract

We propose a new model selection algorithm, called ‘LassoSD’, which combines a step-down multiple-testing approach with penalized minimization. First, we use the Lasso to discard a substantial part of the irrelevant predictors. In the next step, the algorithm considers only the support of the Lasso and selects the final model in a way motivated by multiple testing (analogous to the Bonferroni correction or the Holm method). We state nonasymptotic probabilistic inequalities that upper-bound the model selection error of LassoSD in the high-dimensional linear model, i.e., where the number of predictors can be much larger than the sample size. In the experimental part of the paper, we compare the model selection properties of the algorithm to those of competing procedures.
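
To make the two-stage idea concrete, below is a minimal illustrative sketch: Lasso screening followed by a Holm-style step-down test on the retained support. This is not the authors' exact LassoSD procedure; the OLS refit, the t-statistics, and the significance level `alpha` are assumptions chosen only to show the shape of such an algorithm.

```python
# Illustrative sketch only -- NOT the paper's LassoSD algorithm.
# Assumptions: an OLS refit on the screened support (requires the
# support size to be smaller than n), classical t-test p-values,
# and a user-chosen significance level alpha.
import numpy as np
from scipy import stats
from sklearn.linear_model import LassoCV

def lasso_stepdown_sketch(X, y, alpha=0.05):
    n, p = X.shape
    # Stage 1: Lasso screening -- keep predictors with nonzero coefficients.
    lasso = LassoCV(cv=5).fit(X, y)
    support = np.flatnonzero(lasso.coef_)
    if support.size == 0:
        return support
    # Stage 2: OLS refit on the screened support, then per-coefficient
    # t-statistics and two-sided p-values (post-screening inference is
    # treated heuristically here -- an assumption of this sketch).
    Xs = X[:, support]
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    resid = y - Xs @ beta
    df = n - Xs.shape[1]
    sigma2 = resid @ resid / df
    se = np.sqrt(sigma2 * np.diag(np.linalg.inv(Xs.T @ Xs)))
    pvals = 2 * stats.t.sf(np.abs(beta / se), df)
    # Holm step-down: sort p-values; the k-th smallest (0-indexed) is
    # compared with alpha / (m - k). Stop at the first failure and keep
    # only the predictors accepted before it.
    order = np.argsort(pvals)
    m = pvals.size
    keep = []
    for k, idx in enumerate(order):
        if pvals[idx] > alpha / (m - k):
            break
        keep.append(support[idx])
    return np.array(sorted(keep))
```

The step-down comparison mirrors the Holm method mentioned in the abstract: thresholds grow as hypotheses are rejected, so the procedure is less conservative than a plain Bonferroni cut at alpha / m while still controlling the family-wise error rate under the usual assumptions.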
