Abstract
In this paper, we provide explicit upper bounds on some distances between the (law of the) output of a random Gaussian neural network and (the law of) a random Gaussian vector. Our main results concern deep random Gaussian neural networks with a rather general activation function. The upper bounds show how the widths of the layers, the activation function, and other architecture parameters affect the Gaussian approximation of the output. Our techniques, relying on Stein's method and integration by parts formulas for the Gaussian law, yield estimates on distances that are indeed integral probability metrics and include the convex distance. The latter metric is defined by testing against indicator functions of measurable convex sets and so allows for accurate estimates of the probability that the output is localized in some region of the space, an aspect of significant interest from both a practitioner's and a theorist's perspective. We illustrate our results with some numerical examples.

Funding: This research was supported by the European Union's Horizon 2020 research project WARIFA under grant agreement no. 101017385, by the PRIN project 2022 "Variational Analysis of Complex Systems in Materials Science, Physics and Biology" (CUP B53D23009290006), and by the INdAM project "Modelli ed Algoritmi per dati ad elevata dimensionalità" (CUP E53C23001670001).
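The phenomenon studied in the paper can be probed empirically: at a fixed input, the output of a deep network with iid Gaussian weights and biases becomes approximately Gaussian as the hidden layers widen. The following is a minimal, hedged sketch of such an experiment; the specific architecture (tanh hidden layers, a linear output layer, widths, and the variance scaling c_w / n for layer-n fan-in) is an illustrative assumption, not the paper's exact setup or its numerical examples.

```python
# Illustrative simulation: output of a random Gaussian network at a fixed
# input, resampled over the random weights, compared crudely to a Gaussian.
# All architecture choices here are assumptions for the sketch.
import numpy as np

def random_gaussian_net_output(x, widths, rng, c_w=2.0, c_b=0.1):
    """One forward pass with freshly drawn iid Gaussian weights.

    Layer-l weights have variance c_w / widths[l] (fan-in scaling),
    biases have variance c_b; tanh on hidden layers, linear output.
    """
    h = np.asarray(x, dtype=float)
    L = len(widths) - 1
    for l in range(L):
        W = rng.normal(0.0, np.sqrt(c_w / widths[l]),
                       size=(widths[l + 1], widths[l]))
        b = rng.normal(0.0, np.sqrt(c_b), size=widths[l + 1])
        h = W @ h + b
        if l < L - 1:          # activation on hidden layers only
            h = np.tanh(h)
    return h

rng = np.random.default_rng(0)
x = np.ones(5)                 # fixed input
# Resample the network 2000 times; collect the scalar output.
samples = np.array([random_gaussian_net_output(x, [5, 256, 256, 1], rng)[0]
                    for _ in range(2000)])

# Crude Gaussianity diagnostic: standardized excess kurtosis should be
# close to 0 for a Gaussian law (up to Monte Carlo error).
z = (samples - samples.mean()) / samples.std()
excess_kurtosis = (z ** 4).mean() - 3.0
print(f"excess kurtosis: {excess_kurtosis:.3f}")
```

A single moment diagnostic is of course much weaker than the paper's bounds on integral probability metrics such as the convex distance, which control the probability of the output falling in any measurable convex set; the sketch only hints at the effect that those bounds quantify.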