We study the eigenvalue distribution of random matrices pertinent to the analysis of deep neural networks. The matrices resemble products of sample covariance matrices; an important difference, however, is that the analog of the population covariance matrix is now a function of random data matrices (synaptic weight matrices, in deep neural network terminology). The problem has been treated in recent work [J. Pennington, S. Schoenholz and S. Ganguli, The emergence of spectral universality in deep networks, Proc. Mach. Learn. Res. 84 (2018) 1924–1932, arXiv:1802.09979] using the techniques of free probability theory. Since, however, free probability theory deals with population covariance matrices that are independent of the data matrices, its applicability in this case has to be justified. The justification was given in [L. Pastur, On random matrices arising in deep neural networks: Gaussian case, Pure Appl. Funct. Anal. (2020), in press, arXiv:2001.06188] for Gaussian data matrices with independent entries, a standard analytical model of free probability, using a version of the techniques of random matrix theory. In this paper, we use another version of these techniques to extend the results of the latter work to the case where the entries of the data matrices are just independent identically distributed random variables with zero mean and finite fourth moment. This, in particular, justifies the mean field approximation in the infinite width limit for deep untrained neural networks, as well as the macroscopic universality property of random matrix theory in this case.
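The macroscopic universality claimed above can be illustrated numerically. The following sketch (not taken from the paper; the network size `n`, depth `L`, and the Rademacher comparison distribution are illustrative choices) compares the empirical spectrum of J^T J, where J = W_L ⋯ W_1 is the product of i.i.d. random weight matrices, for Gaussian versus Rademacher (±1) entries. Universality predicts that the linear eigenvalue statistics of the two ensembles agree in the infinite-width limit.

```python
import numpy as np

rng = np.random.default_rng(0)
n, L = 500, 3  # illustrative width and depth

def product_spectrum(sampler):
    """Eigenvalues of J^T J for J = W_L ... W_1, entries of variance 1/n."""
    J = np.eye(n)
    for _ in range(L):
        W = sampler((n, n)) / np.sqrt(n)  # variance-1/n scaling
        J = W @ J
    # squared singular values of J = eigenvalues of J^T J
    return np.linalg.svd(J, compute_uv=False) ** 2

gauss = product_spectrum(rng.standard_normal)
radem = product_spectrum(lambda size: rng.choice([-1.0, 1.0], size=size))

# Macroscopic universality: the first spectral moments of the two
# ensembles should agree up to finite-n fluctuations (both means are
# close to 1, since E[W^T W] = I at this scaling).
print("means:", gauss.mean(), radem.mean())
print("second moments:", (gauss**2).mean(), (radem**2).mean())
```

Only the distribution of the i.i.d. entries differs between the two runs (both have zero mean, unit variance, and finite fourth moment), so the agreement of the empirical moments is a finite-size manifestation of the universality established in the paper.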