Abstract

With deep neural networks being the state-of-the-art architectures for solving complex tasks in Natural Language Processing (NLP) and image processing, Convolutional Neural Networks (CNNs) have proven to outperform existing architectures in these domains. Every neural network consists of two kinds of basic units: trainable and non-trainable parameters. The trainable parameters, such as the weights and biases of the neurons, are adjusted automatically by the network during training. The non-trainable parameters, in contrast, must be set explicitly, and this choice has a direct impact on the performance of the network. To improve the performance of a neural network, some of these non-trainable parameters, called hyper-parameters, need to be fine-tuned. The most important hyper-parameter that helps to quickly identify an optimal point is the learning rate: the parameter that governs how strongly the weights of the neurons in the network are adjusted with respect to a loss function. Improper initialization of this parameter can cause the network to get stuck in saddle points or local minima. This article proposes an approach to initialize the learning rate and follow a suitable decay policy, called Finding Learning Rate's Optimal Boundaries (FLOB), which is experimented with GoogleNet on datasets such as Fashion-MNIST, CIFAR-10, and Caltech-256 and evaluated with standard metrics.
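To make the role of the learning rate concrete, the sketch below shows how it scales each weight update with respect to the gradient of a loss function, and how a decay policy shrinks that step size over training. This is an illustrative example only: the abstract does not disclose FLOB's boundary search or its decay policy, so the exponential decay schedule and the mean-squared-error loss used here are assumptions, not the proposed method.

```python
# Minimal sketch: gradient descent with a decaying learning rate.
# The exponential decay and the linear-regression loss are assumed for
# illustration; they are not the FLOB procedure described in the paper.
import numpy as np

def mse_loss_grad(w, X, y):
    """Gradient of the mean-squared-error loss for a linear model y_hat = X @ w."""
    y_hat = X @ w
    return 2.0 * X.T @ (y_hat - y) / len(y)

def train(X, y, lr0=0.1, decay=0.95, epochs=50):
    w = np.zeros(X.shape[1])
    for epoch in range(epochs):
        lr = lr0 * (decay ** epoch)   # assumed exponential decay policy
        grad = mse_loss_grad(w, X, y)
        w -= lr * grad                # learning rate scales the gradient step
    return w

# Toy usage: fit y = 2*x on synthetic data.
X = np.linspace(0.0, 1.0, 100).reshape(-1, 1)
y = 2.0 * X.ravel()
w = train(X, y)
```

A learning rate that is too large makes the update step overshoot the minimum, while one that is too small slows convergence or leaves the optimizer stuck near saddle points; this is the trade-off that initializing the rate within suitable boundaries and then decaying it is meant to address.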
