Abstract

Most state-of-the-art results on image classification tasks have been obtained with residual neural networks trained by stochastic gradient descent (SGD) with momentum. In most cases, the learning rate is dropped by a constant factor after a pre-defined number of epochs, but estimating how many epochs to wait before each drop is difficult and time-consuming. To tackle this problem, cyclical learning rates have gained popularity in gradient-based optimization as a way to improve convergence speed in accelerated gradient schemes. However, a cyclical learning rate scheme scans a broad range of learning rates, some of which are unsuitable for deep neural network training. In this paper, we propose a simple yet effective learning rate technique for SGD in which the learning rate varies as a sine wave whose peak value decays exponentially over the training epochs, improving convergence speed. An ensemble of wide residual networks trained with the proposed schedule achieves 3.01% and 16.03% error on CIFAR-10 and CIFAR-100, respectively. Furthermore, the proposed method uses far fewer epochs than most recent learning rate strategies, accelerating neural network training considerably.
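As a rough illustration of the schedule described above, the Python sketch below shows one plausible form of a sine-wave learning rate whose peak decays exponentially; the function name and the parameters lr_max, gamma, and period are illustrative assumptions, not the exact formulation used in the paper.

```python
import math

def sine_exp_decay_lr(epoch, lr_max=0.1, gamma=0.95, period=10):
    """Illustrative sine-wave learning rate with an exponentially decaying peak.

    Note: lr_max, gamma, and period are assumed hyperparameters for this
    sketch; the paper's exact schedule may differ.
    """
    peak = lr_max * (gamma ** epoch)               # peak decays exponentially with epochs
    phase = 2.0 * math.pi * epoch / period         # position within the current sine cycle
    return peak * 0.5 * (1.0 + math.sin(phase))    # oscillates between 0 and the decayed peak
```

Such a function could be used in a standard training loop by setting the optimizer's learning rate at the start of each epoch.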
