Abstract

Improving the generalization performance of deep neural networks (DNNs) trained by minibatch stochastic gradient descent (SGD) has drawn considerable attention from deep learning practitioners. The standard simple random sampling (SRS) scheme used in minibatch SGD treats all training samples equally in gradient estimation. In this article, we study a new data selection method, based on an intrinsic property of the training set, that helps DNNs achieve better generalization performance. Our theoretical analysis suggests that this new sampling scheme, called the nontypicality sampling scheme, boosts the generalization performance of DNNs by biasing the solution toward wider minima, under certain assumptions. We confirm our findings experimentally and show that other variants of minibatch SGD can also benefit from the new sampling scheme. Finally, we discuss an extension of the nontypicality sampling scheme that holds promise to enhance both the generalization performance and the convergence speed of minibatch SGD.
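To make the contrast with SRS concrete, the following is a minimal Python sketch of biased minibatch selection. The abstract does not define how nontypicality is measured, so the per-sample scores here are purely hypothetical placeholders (e.g., a proxy such as per-sample loss or distance from a class centroid); the function names and the gamma-distributed scores are illustrative assumptions, not the paper's method.

```python
import numpy as np

def srs_minibatch(n_samples, batch_size, rng):
    """Standard simple random sampling (SRS): every training sample is equally likely."""
    return rng.choice(n_samples, size=batch_size, replace=False)

def nontypicality_minibatch(scores, batch_size, rng):
    """Biased minibatch selection: samples with higher nontypicality scores are
    drawn with higher probability. The scores are a hypothetical stand-in; the
    paper's actual definition of nontypicality is not given in the abstract."""
    probs = scores / scores.sum()
    return rng.choice(len(scores), size=batch_size, replace=False, p=probs)

# Usage: hypothetical nontypicality scores for 1,000 training samples.
rng = np.random.default_rng(0)
scores = rng.gamma(shape=2.0, scale=1.0, size=1_000)
uniform_batch = srs_minibatch(len(scores), batch_size=64, rng=rng)
biased_batch = nontypicality_minibatch(scores, batch_size=64, rng=rng)
```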
