Abstract
The remarkable practical success of deep learning has revealed some major surprises from a theoretical perspective. In particular, simple gradient methods easily find near-optimal solutions to non-convex optimization problems, and despite giving a near-perfect fit to training data without any explicit effort to control model complexity, these methods exhibit excellent predictive accuracy. We conjecture that specific principles underlie these phenomena: that overparametrization allows gradient methods to find interpolating solutions, that these methods implicitly impose regularization, and that overparametrization leads to benign overfitting, that is, accurate predictions despite overfitting training data. In this article, we survey recent progress in statistical learning theory that provides examples illustrating these principles in simpler settings. We first review classical uniform convergence results and why they fall short of explaining aspects of the behaviour of deep learning methods. We give examples of implicit regularization in simple settings, where gradient methods lead to minimal norm functions that perfectly fit the training data. Then we review prediction methods that exhibit benign overfitting, focusing on regression problems with quadratic loss. For these methods, we can decompose the prediction rule into a simple component that is useful for prediction and a spiky component that is useful for overfitting but, in a favourable setting, does not harm prediction accuracy. We focus specifically on the linear regime for neural networks, where the network can be approximated by a linear model. In this regime, we demonstrate the success of gradient flow, and we consider benign overfitting with two-layer networks, giving an exact asymptotic analysis that precisely demonstrates the impact of overparametrization. We conclude by highlighting the key challenges that arise in extending these insights to realistic deep learning settings.
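As a concrete illustration of the implicit regularization principle, the following small numerical sketch (illustrative settings of our own choosing, not taken from the article, using standard numpy) shows that gradient descent started from zero on an overparametrized least-squares problem interpolates noisy training data and converges to the minimum-norm solution.

    # Minimal sketch: gradient descent on overparametrized least squares.
    # All dimensions, noise levels and variable names here are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 50, 500                                   # many more parameters than samples
    X = rng.standard_normal((n, d))
    w_star = np.zeros(d); w_star[:5] = 1.0           # a simple underlying signal
    y = X @ w_star + 0.5 * rng.standard_normal(n)    # noisy labels

    lr = 1.0 / np.linalg.norm(X, 2) ** 2             # step size below 2/L for the squared loss
    w = np.zeros(d)                                  # zero initialization keeps iterates in the row space of X
    for _ in range(2000):
        w -= lr * X.T @ (X @ w - y)                  # gradient of 0.5 * ||Xw - y||^2

    w_min_norm = np.linalg.pinv(X) @ y               # minimum-l2-norm solution of Xw = y

    print("training error:", np.mean((X @ w - y) ** 2))                         # ~0: perfect fit to noisy data
    print("distance to min-norm interpolant:", np.linalg.norm(w - w_min_norm))  # ~0: implicit regularization
    # Whether this interpolation is benign (accurate out-of-sample prediction)
    # depends on the covariate distribution; that is the question the article analyses.

Zero initialization matters here: it keeps the iterates in the row space of X, which is why plain gradient descent, with no explicit penalty, selects the minimum-norm interpolant among the infinitely many solutions.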
Highlights
The past decade has witnessed dramatic advances in machine learning that have led to major breakthroughs in computer vision, speech recognition, and robotics.
We have considered the statistical performance of the empirical risk minimizer $\hat{f}_{\mathrm{erm}}$ without accounting for the computational cost of solving this optimization problem.
It is instructive to consider the implications of the generalization bounds we have reviewed for the phenomenon of benign overfitting, which has been observed in deep learning.
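In standard statistical-learning notation (which may differ in detail from the article's), the empirical risk minimizer and the benign overfitting phenomenon referred to in the last two highlights can be written as

    \[
    \hat{f}_{\mathrm{erm}} \in \operatorname*{arg\,min}_{f \in \mathcal{F}} \hat{R}(f),
    \qquad
    \hat{R}(f) = \frac{1}{n} \sum_{i=1}^{n} \ell\bigl(f(x_i), y_i\bigr),
    \]
    \[
    \text{benign overfitting:}\quad
    \hat{R}(\hat{f}) \approx 0
    \quad\text{while}\quad
    R(\hat{f}) = \mathbb{E}\,\ell\bigl(\hat{f}(x), y\bigr)
    \ \text{remains close to}\ \inf_{f} R(f).
    \]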
Summary
The past decade has witnessed dramatic advances in machine learning that have led to major breakthroughs in computer vision, speech recognition, and robotics. Deep learning reveals some major surprises from a theoretical perspective: deep learning methods can find near-optimal solutions to highly non-convex empirical risk minimization problems, solutions that give a near-perfect fit to noisy training data, yet, despite making no explicit effort to control model complexity, these methods lead to excellent prediction performance in practice. Deep learning exploits rich and expressive models with many parameters, and the problem of optimizing the fit to the training data appears to simplify dramatically when the function class is rich enough, that is, when it is sufficiently overparametrized. The second surprising empirical discovery was that these models lie outside the realm of uniform convergence: they are enormously complex, with many parameters, they are trained with no explicit regularization to control their statistical complexity, and they typically exhibit a near-perfect fit to noisy training data, that is, empirical risk close to zero. It seems likely that depth is crucial for these issues of expressivity.
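To make the reference to uniform convergence concrete, classical generalization bounds take roughly the following schematic form (a standard textbook statement, not a result quoted from the article):

    \[
    \sup_{f \in \mathcal{F}} \bigl| R(f) - \hat{R}(f) \bigr|
    \;\lesssim\;
    \sqrt{\frac{\mathrm{complexity}(\mathcal{F})}{n}},
    \]

which is informative only when the complexity of the class $\mathcal{F}$ is small relative to the sample size $n$. For an interpolating predictor $\hat{f}$ with $\hat{R}(\hat{f}) = 0$ on noisy data, the true risk $R(\hat{f})$ cannot be close to zero, so the left-hand side is bounded away from zero and such a bound gives no useful guarantee; this is the sense in which heavily overparametrized, interpolating models fall outside the classical uniform-convergence analysis.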