Abstract
Deep neural networks achieve stellar generalisation even when they have enough parameters to easily fit all their training data. We study this phenomenon by analysing the dynamics and the performance of over-parameterised two-layer neural networks in the teacher–student setup, where one network, the student, is trained on data generated by another network, called the teacher. We show how the dynamics of stochastic gradient descent (SGD) is captured by a set of differential equations and prove that this description is asymptotically exact in the limit of large inputs. Using this framework, we calculate the final generalisation error of student networks that have more parameters than their teachers. We find that the final generalisation error of the student increases with network size when training only the first layer, but stays constant or even decreases with size when training both layers. We show that these different behaviours have their root in the different solutions SGD finds for different activation functions. Our results indicate that achieving good generalisation in neural networks goes beyond the properties of SGD alone and depends on the interplay of at least the algorithm, the model architecture, and the data set.
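To make the setup concrete, here is a minimal numerical sketch (not the authors' code) of the teacher–student experiment with a sigmoidal activation g(x) = erf(x/√2): a fixed teacher with M hidden units labels i.i.d. Gaussian inputs, and an over-parameterised student with K ≥ M hidden units is trained by online SGD on the squared loss, here with only the first layer trained (the soft committee case). All sizes, the learning rate and the number of steps are illustrative choices.

```python
import numpy as np
from scipy.special import erf

rng = np.random.default_rng(0)

N, M, K = 500, 2, 4          # input dimension, teacher / student hidden units (illustrative)
lr, steps = 0.5, 200_000     # SGD learning rate and number of online steps (illustrative)

def g(x):                    # sigmoidal activation g(x) = erf(x / sqrt(2))
    return erf(x / np.sqrt(2))

def gprime(x):               # its derivative g'(x) = sqrt(2/pi) * exp(-x^2 / 2)
    return np.sqrt(2 / np.pi) * np.exp(-x**2 / 2)

# Teacher: fixed two-layer network that generates the labels.
w_star = rng.standard_normal((M, N))   # first-layer weights
v_star = np.ones(M)                    # second-layer weights (fixed to one)

# Student: over-parameterised two-layer network, K >= M hidden units.
w = rng.standard_normal((K, N))
v = np.ones(K)                         # second layer kept fixed: only w is trained here

def phi(weights, second, x):
    """Two-layer network output: sum_k v_k * g(w_k . x / sqrt(N))."""
    return second @ g(weights @ x / np.sqrt(N))

for _ in range(steps):
    x = rng.standard_normal(N)         # fresh Gaussian input at every step (online learning)
    y = phi(w_star, v_star, x)         # teacher label
    pre = w @ x / np.sqrt(N)           # student pre-activations
    err = phi(w, v, x) - y             # prediction error on this sample
    # Online SGD on the squared loss 1/2 * err^2, first layer only.
    grad_w = np.outer(err * v * gprime(pre), x) / np.sqrt(N)
    w -= lr * grad_w

# Estimate the generalisation error on fresh inputs from the same distribution.
X_test = rng.standard_normal((10_000, N))
y_teacher = g(X_test @ w_star.T / np.sqrt(N)) @ v_star
y_student = g(X_test @ w.T / np.sqrt(N)) @ v
print("test mse:", 0.5 * np.mean((y_student - y_teacher) ** 2))
```

Training both layers, the second scenario discussed in the abstract, would simply add the analogous SGD update for v inside the loop.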
Summary
Given a set of non-linear, coupled ODEs such as equation (9), finding the asymptotic fixed points analytically in order to compute the generalisation error appears intractable. We therefore focus on analysing the asymptotic fixed points found by numerically integrating the equations of motion. The form of these fixed points reveals a drastically different dependence of the test error on the over-parameterisation of neural networks with different activation functions in the different setups we consider, despite all of them being trained by SGD. This highlights the fact that good generalisation goes beyond the properties of the algorithm alone. We note that several recent theorems [29, 30, 31] about the global convergence of SGD do not apply in our setting, because we have a finite number of hidden units.
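Since equation (9) is not reproduced here, the sketch below only illustrates the general procedure: integrate a set of coupled order-parameter ODEs numerically until the order parameters stop changing, then evaluate the generalisation error at the resulting fixed point. The right-hand side `rhs` and the function `test_error` are hypothetical stand-ins for the actual equations of motion and the closed-form expression of the test error in terms of the order parameters.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, q):
    # Hypothetical stand-in for the order-parameter equations of motion (eq. (9));
    # this toy drift simply relaxes every order parameter towards 1.
    return 1.0 - q

def test_error(q):
    # Placeholder: in the analysis, the generalisation error is a closed-form
    # function of the order parameters evaluated at the fixed point.
    return float(np.mean((1.0 - q) ** 2))

q0 = np.zeros(6)                             # initial order parameters (illustrative)
sol = solve_ivp(rhs, (0.0, 50.0), q0, rtol=1e-8, atol=1e-10)

q_final = sol.y[:, -1]
drift = np.linalg.norm(rhs(sol.t[-1], q_final))
print("residual drift at final time:", drift)        # ~0 signals an asymptotic fixed point
print("asymptotic test error (toy):", test_error(q_final))
```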