Abstract

We study the estimation properties of the Elastic Net estimator in high-dimensional linear regression models where the number of parameters p is comparable to or larger than the sample size n. In such a setting, one often assumes sparsity of the true regression coefficient vector β*, i.e., that β* belongs to an ℓ_q-ball with radius R_q for some q ∈ [0, 1]. In this paper, we provide ℓ2-estimation error bounds for the Elastic Net and naive Elastic Net estimators under the unified framework for high-dimensional analysis of M-estimators proposed by Negahban et al. [A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. Adv Neural Inf Process Syst. 2009;22:1348–1356]. We show that, in both the exactly sparse (q = 0) and weakly sparse (0 < q ≤ 1) cases, under the same conditions on the design matrix, the Elastic Net estimator achieves a slightly better error bound than the Lasso estimator when the tuning parameters are chosen suitably.
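The comparison described above can be illustrated numerically. The following is a minimal sketch, not the paper's analysis or experiments: it simulates a high-dimensional regression with p > n and an exactly sparse β* (the q = 0 case), fits Lasso and Elastic Net with illustrative tuning parameters (the sqrt(log p / n) scaling is the usual choice in this literature, not necessarily the paper's), and reports the ℓ2-estimation error of each.

```python
# Hedged numerical sketch: l2-estimation error of Lasso vs Elastic Net
# on a simulated sparse high-dimensional problem (p > n). All parameter
# choices here are illustrative assumptions, not taken from the paper.
import numpy as np
from sklearn.linear_model import ElasticNet, Lasso

rng = np.random.default_rng(0)
n, p, s = 100, 200, 5            # sample size, dimension, exact sparsity level
X = rng.standard_normal((n, p))  # Gaussian design matrix
beta_star = np.zeros(p)
beta_star[:s] = 1.0              # exactly sparse truth (q = 0 case)
y = X @ beta_star + 0.5 * rng.standard_normal(n)

lam = np.sqrt(np.log(p) / n)     # common sqrt(log p / n) tuning scale
lasso = Lasso(alpha=lam).fit(X, y)
# l1_ratio < 1 adds the small ridge (l2) component of the Elastic Net penalty
enet = ElasticNet(alpha=lam, l1_ratio=0.9).fit(X, y)

err_lasso = np.linalg.norm(lasso.coef_ - beta_star)  # ||beta_hat - beta*||_2
err_enet = np.linalg.norm(enet.coef_ - beta_star)
print(f"Lasso       l2 error: {err_lasso:.3f}")
print(f"Elastic Net l2 error: {err_enet:.3f}")
```

Both errors should be small relative to ||β*||_2 at this sample size; the theoretical bounds in the paper concern how these errors scale with (n, p, R_q), which a single simulated instance can only hint at.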
