Abstract

In “High-Dimensional Learning Under Approximate Sparsity with Applications to Nonsmooth Estimation and Regularized Neural Networks,” Liu, Ye, and Lee study model fitting problems in which the data are far fewer than the problem dimensions. They focus in particular on scenarios where the commonly imposed sparsity assumption is relaxed and the usual restricted strong convexity condition is absent. The results show that generalization performance can still be ensured in such settings, even when the problem dimension grows exponentially in the sample size. The authors further study the sample complexities of high-dimensional nonsmooth estimation and of neural networks. For the latter in particular, they show that, with explicit regularization, a neural network is provably generalizable even when the sample size is only poly-logarithmic in the number of fitting parameters.
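
To make the setting concrete, the sketch below shows one generic form of explicitly regularized, high-dimensional estimation in the regime where the sample size n is much smaller than the dimension p: an l1-penalized least-squares fit of an approximately sparse signal, solved by proximal gradient descent. This is an illustration under assumed choices only; the estimator, parameter values, and variable names are not taken from the paper.

# A minimal illustration (not the authors' estimator): explicit l1
# regularization for a regression problem with far fewer samples (n)
# than parameters (p) and an approximately sparse ground truth.
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 2000                                  # n << p: fewer data than dimensions
beta_true = np.zeros(p)
beta_true[:10] = rng.normal(size=10)             # a few large coefficients
beta_true[10:] = 0.01 * rng.normal(size=p - 10)  # many small ones: approximate sparsity

X = rng.normal(size=(n, p)) / np.sqrt(n)
y = X @ beta_true + 0.01 * rng.normal(size=n)

lam = 0.05                                # regularization weight (illustrative value)
step = 1.0 / np.linalg.norm(X, 2) ** 2    # step size from the largest singular value

def soft_threshold(v, t):
    # Proximal operator of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

beta = np.zeros(p)
for _ in range(500):                      # proximal gradient (ISTA) iterations
    grad = X.T @ (X @ beta - y)
    beta = soft_threshold(beta - step * grad, step * lam)

print("estimation error:", np.linalg.norm(beta - beta_true))

Despite n being 40 times smaller than p, the explicit penalty keeps the estimate close to the approximately sparse truth, which is the flavor of guarantee the abstract describes.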
