Abstract

We address the choice of the tuning parameter λ in ℓ1-penalized M-estimation. Our main concern is with models that are highly non-linear, such as the Gaussian mixture model. Moreover, the number of parameters p is large, possibly larger than the number of observations n. The generic chaining technique of Talagrand (2005) is tailored for this problem. It leads to the choice λ ≈ √(log p / n), as in the standard Lasso procedure (which concerns the linear model and least squares loss).
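As a rough illustration of the λ ≈ √(log p / n) scaling in the simplest setting mentioned above (linear model, least squares loss), the sketch below fits a standard Lasso with a tuning parameter of that order. The scikit-learn estimator, the synthetic data, the noise level σ, and the constant in front of the rate are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Synthetic high-dimensional linear model (illustration only; the paper
# treats general, possibly highly non-linear, penalized M-estimation).
rng = np.random.default_rng(0)
n, p, s = 100, 500, 5                      # n observations, p >> n parameters, s nonzeros
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:s] = 1.0                             # sparse true coefficient vector
sigma = 0.5                                # assumed noise level (illustrative)
y = X @ beta + sigma * rng.standard_normal(n)

# Tuning parameter of order sqrt(log p / n), as in the abstract; the
# multiplicative constant 2 * sigma is a heuristic choice for this sketch.
lam = 2.0 * sigma * np.sqrt(np.log(p) / n)

model = Lasso(alpha=lam).fit(X, y)
print("indices of selected coefficients:", np.flatnonzero(model.coef_))
```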
