Abstract

We revisit the weight initialization of deep residual networks (ResNets) by introducing a novel analytical tool from free probability to the deep learning community. This tool deals with the limiting spectral distribution of non-Hermitian random matrices, rather than the Hermitian matrices conventionally studied in the literature, and enables us to evaluate the singular value spectrum of the input-output Jacobian of a fully connected deep ResNet in both the linear and nonlinear cases. With this tool from free probability, we conduct an asymptotic analysis of the (limiting) spectrum in the single-layer case and then extend the analysis to the multi-layer case with an arbitrary number of layers. The asymptotic analysis shows the necessity and universality of rescaling the classical random initialization by the number of residual units L, so that the squared singular values of the associated Jacobian remain of order O(1) even when the width and depth of the network are large. We empirically demonstrate that the proposed initialization scheme learns orders of magnitude faster than the classical ones, which attests to the strong practical relevance of this investigation.
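
As a concrete illustration, the sketch below shows one reading of the proposed scheme in PyTorch: the residual-branch weights of a fully connected ResNet are drawn from a zero-mean Gaussian whose classical 1/width variance is further divided by the number of residual units L. This is not the authors' released code; the names `width` and `L` and the purely linear residual units are illustrative assumptions.

```python
import torch
import torch.nn as nn

def init_residual_branches(width: int, L: int) -> nn.ModuleList:
    """Fully connected residual branches with the 1/L-rescaled Gaussian init."""
    branches = nn.ModuleList()
    # Variance 1/(width * L) instead of the classical 1/width.
    std = (1.0 / (width * L)) ** 0.5
    for _ in range(L):
        layer = nn.Linear(width, width, bias=False)
        nn.init.normal_(layer.weight, mean=0.0, std=std)
        branches.append(layer)
    return branches

def resnet_forward(x: torch.Tensor, branches: nn.ModuleList) -> torch.Tensor:
    # Linear residual units: x_{l+1} = x_l + W_l x_l.
    for layer in branches:
        x = x + layer(x)
    return x
```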

Highlights

  • Deep neural networks have achieved impressive results in numerous fields, from computer vision [1] to speech recognition [2] and natural language processing [3]

  • We refer to this property as the ‘‘Spectrum Concentration'' of the Jacobian matrix; it differs from the related concept of ‘‘Dynamical Isometry'' [7], which requires all singular values to remain close to 1

  • In this article, exploiting advanced tools from random matrix theory in the regime of large network width and depth, we prove that, for residual networks (ResNets), the variance of the random weights should be scaled as a function of the number of layers, so as to prevent the vanishing or exploding gradient problem via spectrum concentration



Introduction

Deep neural networks have achieved impressive results in numerous fields, from computer vision [1] to speech recognition [2] and natural language processing [3]. To preserve the norm of a randomly chosen error vector through backpropagation, the squared singular values of the Jacobian matrix should remain of order O(1), even when the width or depth of the network is (possibly) tremendous. We refer to this property as the ‘‘Spectrum Concentration'' of the Jacobian matrix; it differs from the related concept of ‘‘Dynamical Isometry'' [7], which requires all singular values to remain close to 1.
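
The following self-contained sketch checks this property numerically for a deep linear ResNet: it builds the input-output Jacobian with automatic differentiation and inspects its squared singular values, which stay of order O(1) under the 1/L-rescaled initialization. The width and depth below are illustrative assumptions, not values from the paper.

```python
import torch

torch.manual_seed(0)
width, L = 256, 64  # illustrative width and number of residual units

# Residual-branch weights with the rescaled variance 1/(width * L).
weights = [torch.randn(width, width) / (width * L) ** 0.5 for _ in range(L)]

def resnet(x: torch.Tensor) -> torch.Tensor:
    # Linear residual units: x_{l+1} = x_l + W_l x_l.
    for W in weights:
        x = x + W @ x
    return x

x0 = torch.randn(width)
# Input-output Jacobian of the whole network, evaluated at x0.
J = torch.autograd.functional.jacobian(resnet, x0)

sq_sv = torch.linalg.svdvals(J) ** 2
print(float(sq_sv.min()), float(sq_sv.mean()), float(sq_sv.max()))
# Under the 1/L rescaling these squared singular values remain O(1);
# dropping the 1/L factor makes them grow exponentially with the depth L.
```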
