Abstract

<p style='text-indent:20px;'>We consider shallow (single hidden layer) neural networks and characterize their performance when trained with stochastic gradient descent as the number of hidden units <inline-formula><tex-math id="M1">\begin{document}$ N $\end{document}</tex-math></inline-formula> and gradient descent steps grow to infinity. In particular, we investigate the effect of different scaling schemes, which lead to different normalizations of the neural network, on the network's statistical output, closing the gap between the <inline-formula><tex-math id="M2">\begin{document}$ 1/\sqrt{N} $\end{document}</tex-math></inline-formula> and the mean-field <inline-formula><tex-math id="M3">\begin{document}$ 1/N $\end{document}</tex-math></inline-formula> normalization. We develop an asymptotic expansion for the neural network's statistical output, pointwise with respect to the scaling parameter, as the number of hidden units grows to infinity. Based on this expansion, we demonstrate mathematically that to leading order in <inline-formula><tex-math id="M4">\begin{document}$ N $\end{document}</tex-math></inline-formula>, there is no bias-variance trade-off, in that both bias and variance (both explicitly characterized) decrease as the number of hidden units increases and time grows. In addition, we show that to leading order in <inline-formula><tex-math id="M5">\begin{document}$ N $\end{document}</tex-math></inline-formula>, the variance of the neural network's statistical output decays as the normalization implied by the scaling parameter approaches the mean-field normalization. Numerical studies on the MNIST and CIFAR-10 datasets show that test and train accuracy monotonically improve as the neural network's normalization gets closer to the mean-field normalization.
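The variance behavior described above can be illustrated with a small numerical sketch (an illustration only, not the paper's construction): a shallow network <tex-math>$ g_N(x) = N^{-\gamma} \sum_{i = 1}^{N} c_i \, \sigma(w_i x) $</tex-math> with i.i.d. random initialization, where <tex-math>$ \gamma = 1/2 $</tex-math> corresponds to the <tex-math>$ 1/\sqrt{N} $</tex-math> normalization and <tex-math>$ \gamma = 1 $</tex-math> to the mean-field normalization. The function name and parameter choices below are hypothetical:

```python
import numpy as np

def output_variance(gamma, N=400, trials=2000, seed=0):
    """Monte-Carlo variance of g_N(x) = N^{-gamma} * sum_i c_i * tanh(w_i * x)
    at a fixed input x, over independent random initializations."""
    rng = np.random.default_rng(seed)
    x = 0.7  # arbitrary fixed input point
    c = rng.normal(size=(trials, N))  # output-layer weights, one row per trial
    w = rng.normal(size=(trials, N))  # hidden-layer weights
    outs = N ** (-gamma) * np.sum(c * np.tanh(w * x), axis=1)
    return float(outs.var())
```

At initialization, <tex-math>$ \mathrm{Var}(g_N(x)) $</tex-math> scales like <tex-math>$ N^{1-2\gamma} $</tex-math>, so the variance is order one for <tex-math>$ \gamma = 1/2 $</tex-math> and order <tex-math>$ 1/N $</tex-math> for <tex-math>$ \gamma = 1 $</tex-math>, consistent with the decay of the variance as the normalization approaches the mean-field regime.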
