Abstract

The success of deep learning in many real-world tasks has triggered an intense effort to understand the power and limitations of deep learning in the training and generalization of complex tasks, so far with limited progress. In this work, we study the statistical mechanics of learning in Deep Linear Neural Networks (DLNNs), in which the input-output function of an individual unit is linear. Despite the linearity of the units, learning in DLNNs is nonlinear; hence, studying its properties reveals some of the features of nonlinear Deep Neural Networks (DNNs). Importantly, we solve exactly for the network properties following supervised learning, using an equilibrium Gibbs distribution in the weight space. To do this, we introduce the Back-Propagating Kernel Renormalization (BPKR), which allows for the incremental integration of the network weights starting from the network output layer and progressing backward until the first layer's weights are integrated out. This procedure allows us to evaluate important network properties, such as its generalization error, the role of network width and depth, the impact of the size of the training set, and the effects of weight regularization and learning stochasticity. BPKR does not assume specific statistics of the input or of the task's output. Furthermore, by performing partial integration of the layers, the BPKR allows us to compute the properties of the neural representations across the different hidden layers. We also propose an extension of the BPKR to nonlinear DNNs with ReLU activations. Surprisingly, our numerical simulations reveal that, despite the nonlinearity, the predictions of our theory are largely shared by ReLU networks in a wide regime of parameters. Our work is the first exact statistical mechanical study of learning in a family of DNNs, and the first successful theory of learning through successive integration of degrees of freedom in the learned weight space.
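To make the setting concrete, the following minimal NumPy sketch (illustrative only, not code from the paper; the layer sizes, depth, scaling convention, and random data are arbitrary assumptions) demonstrates the two points above: the input-output map of a deep linear network collapses to a single effective matrix and is therefore linear in the input, while the training loss is a high-order polynomial in the individual weight matrices, which is why learning remains nonlinear.

import numpy as np

rng = np.random.default_rng(0)
N0, N, L = 10, 20, 3                        # input dim, hidden width, number of hidden layers (assumed)
Ws = [rng.normal(0, 1 / np.sqrt(N0), size=(N, N0))]                 # first layer
Ws += [rng.normal(0, 1 / np.sqrt(N), size=(N, N)) for _ in range(L - 1)]
a = rng.normal(0, 1 / np.sqrt(N), size=N)   # linear readout weights

def f(x, Ws, a):
    """Deep linear network: f(x) = a^T W_L ... W_1 x (no unit nonlinearity)."""
    h = x
    for W in Ws:
        h = W @ h
    return a @ h

# (1) Linearity in the input: the whole network equals one effective vector beta.
beta = a @ np.linalg.multi_dot(Ws[::-1])    # a^T W_L ... W_1
x = rng.normal(size=N0)
assert np.isclose(f(x, Ws, a), beta @ x)

# (2) Nonlinearity in the weights: rescaling every weight layer by t rescales the
# output by t**(L+1), so the squared loss along this direction is a polynomial of
# degree 2*(L+1) in t (here degree 8), not a quadratic form.
X, y = rng.normal(size=(30, N0)), rng.normal(size=30)
def loss(t):
    Ws_t, a_t = [t * W for W in Ws], t * a
    preds = np.array([f(xi, Ws_t, a_t) for xi in X])
    return 0.5 * np.mean((preds - y) ** 2)

print([round(loss(t), 4) for t in np.linspace(0.0, 2.0, 5)])   # visibly non-parabolic in t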

Highlights

  • Gradient-based learning in multilayered neural networks has achieved surprising success in many real-world problems including machine vision, speech recognition, natural language processing, and multi-agent games [1–4]

  • Renormalization of the order parameters: Here we show that the order parameters u_l undergo a trivial renormalization upon averaging

  • Performing integration over W_l with the same approach we used to compute the partition function Z_{l-1} above, and introducing the same order parameter u_{l-1}, we reduce the above expression to an expression of the same form over the remaining layers (the generic Gaussian integral underlying this step is sketched just after this list)
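As a reminder of the generic tool behind this backward step (this is the standard multivariate Gaussian integral, stated here as a hedged sketch rather than the paper's specific expression for Z_{l-1}), integrating out one layer of linear weights under a Gaussian prior and a quadratic energy uses

\int d^{N} w \, \exp\!\left( -\tfrac{1}{2}\, w^{\top} A\, w + b^{\top} w \right) \;=\; (2\pi)^{N/2} \, (\det A)^{-1/2} \, \exp\!\left( \tfrac{1}{2}\, b^{\top} A^{-1} b \right), \qquad A \succ 0,

which leaves an effective Gibbs weight over the remaining layers; per the highlights above, in the linear network this bookkeeping amounts to a scalar renormalization of the order parameters u_l.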


Summary

INTRODUCTION

Gradient-based learning in multilayered neural networks has achieved surprising success in many real-world problems, including machine vision, speech recognition, natural language processing, and multi-agent games [1–4]. It is well known that the ensemble of input-output functions implemented by infinitely wide networks is equivalent to a Gaussian Process (GP) in function space, with a covariance matrix given by a Gaussian kernel, i.e., the kernel matrix averaged over weights sampled from the Gaussian distribution. This GP limit holds when the network width (the number of neurons in each layer), N, approaches infinity while the size of the training data, P, is held constant, severely limiting its applicability to most realistic conditions.
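The following minimal Monte Carlo sketch (illustrative only, not the paper's code; the input dimension, width, depth, and prior variance are assumptions) makes the GP statement concrete for a deep linear network: averaging the product of outputs over weights drawn i.i.d. from a zero-mean Gaussian yields the GP covariance, which for linear units reduces to a rescaled linear kernel of the inputs.

import numpy as np

rng = np.random.default_rng(1)
N0, N, L, sigma2 = 5, 100, 3, 1.0      # input dim, hidden width, hidden depth, prior variance (assumed)

def sample_outputs(x1, x2):
    """One random draw of all weights; returns (f(x1), f(x2)) for that draw."""
    h1, h2, fan_in = x1, x2, N0
    for _ in range(L):
        W = rng.normal(0.0, np.sqrt(sigma2), size=(N, fan_in))
        h1, h2 = W @ h1 / np.sqrt(fan_in), W @ h2 / np.sqrt(fan_in)
        fan_in = N
    a = rng.normal(0.0, np.sqrt(sigma2), size=N)
    return a @ h1 / np.sqrt(N), a @ h2 / np.sqrt(N)

x1 = rng.normal(size=N0)
x2 = x1 + rng.normal(size=N0)          # a second, correlated input
samples = np.array([sample_outputs(x1, x2) for _ in range(4000)])
empirical = np.mean(samples[:, 0] * samples[:, 1])       # <f(x1) f(x2)> averaged over weight draws
analytic = sigma2 ** (L + 1) * (x1 @ x2) / N0            # rescaled linear (input) kernel
print(empirical, analytic)             # the two agree up to Monte Carlo error

For linear networks this covariance identity holds in expectation at any width; it is the Gaussianity of the function-space distribution that requires N to approach infinity with P held fixed, which is exactly the regime limitation noted above.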

Statistical mechanics of learning in deep networks
The Back-Propagating Kernel Renormalization
BPKR for narrow architectures
Predictor statistics
GENERALIZATION
Dependence of generalization on noise
Dependence of generalization on width
Dependence of generalization on depth
Varying the size of the training set
Multiple outputs
Finite temperature
Layerwise mean kernels
Mean inverse kernels
Approximate BPKR for ReLU networks
Generalization in ReLU networks
DISCUSSION
Appendix C
Finite T effects