Abstract

This paper concerns the distribution of singular values of the input–output Jacobian of deep untrained neural networks in the infinite-width limit. The Jacobian is a product of random matrices in which independent weight matrices alternate with diagonal matrices whose entries depend on the corresponding column of the neighboring weight matrix. The problem has been considered in several recent studies, both for Gaussian weights and biases and for weights that are Haar-distributed orthogonal matrices with Gaussian biases. Based on a free-probability argument, those papers claimed that, in the limit of infinite width (matrix size), the singular value distribution of the Jacobian coincides with that of an analogous product in which the diagonal matrices are random but weight-independent, a case well known in random matrix theory. In this paper, we justify this claim for Haar-distributed random weight matrices and Gaussian biases. In particular, this establishes the validity of the mean-field approximation for deep untrained neural networks in the infinite-width limit and extends the macroscopic universality of random matrix theory to this new class of random matrices.
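
For intuition, the Jacobian in question has the form J = D_L W_L ... D_1 W_1, arising from layers x_l = phi(W_l x_{l-1} + b_l), where each D_l is the diagonal matrix of activation derivatives phi'(W_l x_{l-1} + b_l) and hence depends on the neighboring weight matrix W_l. The following is a minimal numerical sketch of this setting, not code from the paper: it builds such a Jacobian for an untrained network with Haar-distributed orthogonal weights and Gaussian biases and returns its singular values. The tanh activation and the values of the width n, depth, and bias scale sigma_b are illustrative assumptions not fixed by the abstract.

    import numpy as np

    def haar_orthogonal(n, rng):
        # Haar-distributed orthogonal matrix: QR of a Gaussian matrix,
        # with column signs fixed so the distribution is exactly Haar.
        z = rng.standard_normal((n, n))
        q, r = np.linalg.qr(z)
        return q * np.sign(np.diag(r))

    def jacobian_singular_values(n=400, depth=6, sigma_b=0.1, seed=0):
        # Input-output Jacobian J = D_L W_L ... D_1 W_1 of an untrained
        # network x_l = phi(W_l x_{l-1} + b_l), with phi = tanh (assumed).
        rng = np.random.default_rng(seed)
        x = rng.standard_normal(n)                 # random input
        J = np.eye(n)
        for _ in range(depth):
            W = haar_orthogonal(n, rng)
            b = sigma_b * rng.standard_normal(n)
            pre = W @ x + b                        # preactivations
            D = np.diag(1.0 / np.cosh(pre) ** 2)   # tanh'(pre); depends on W
            J = D @ W @ J                          # chain rule through the layer
            x = np.tanh(pre)
        return np.linalg.svd(J, compute_uv=False)

    svals = jacobian_singular_values()
    print(svals[:5])

The universality claim discussed above can then be probed empirically by comparing this spectrum, for large n, with that of the same product in which each D_l is replaced by a diagonal matrix with the same entry distribution but drawn independently of the W_l.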
