Abstract

Gradient descent is the workhorse of deep neural networks, but it suffers from slow convergence. The best-known way to overcome slow convergence is to use momentum, which effectively increases the step size of gradient descent. Recently, many approaches have been proposed to control the momentum for better optimization toward the global minimum, such as Adam, diffGrad, and AdaBelief. Adam scales down the momentum by dividing it by the square root of a moving average of squared past gradients (the second moment). A sudden decrease in the second moment often causes the update to overshoot the minimum and then settle at the closest one. DiffGrad reduces this problem by introducing into Adam a friction constant based on the difference between the current gradient and the immediately preceding gradient. However, the friction constant further decreases the momentum and results in slow convergence. AdaBelief adapts the step size according to the belief in the current gradient direction. Another well-known route to faster convergence is to increase the batch size adaptively. This paper proposes a new optimization technique, adaptive diff-batch (adadb), that removes the overshooting problem of Adam and the slow convergence of diffGrad, and combines these methods with an adaptive batch size to further increase the convergence rate. The proposed technique uses a friction constant based on the past three gradient differences, rather than the single difference used in diffGrad, together with a condition that decides when the friction constant is applied. The proposed technique outperforms the Adam, diffGrad, and AdaBelief optimizers on complex synthetic non-convex functions and on real-world datasets.
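To make the update rules discussed above concrete, the sketch below implements an Adam-style step with a diffGrad-like friction coefficient. With `n_diffs=1` it follows the standard diffGrad rule (a sigmoid of the absolute difference between the last two gradients); the `n_diffs=3` option, the simple averaging of the three differences, and the fallback to a plain Adam step when there is not enough gradient history are illustrative assumptions only, not the exact adadb rule or its gating condition.

```python
import numpy as np

def diffgrad_style_step(theta, grads, m, v, t, lr=1e-3,
                        beta1=0.9, beta2=0.999, eps=1e-8,
                        n_diffs=1):
    """One optimizer step on parameters `theta`.

    `grads` is a list of the most recent gradients, newest last,
    and `t` is the 1-based step count used for bias correction.
    n_diffs=1 mirrors diffGrad; n_diffs=3 is only an illustrative
    guess at the 'past three differences' idea from the abstract.
    """
    g = grads[-1]
    # Adam first- and second-moment estimates with bias correction.
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g * g
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)

    # Friction coefficient from the most recent gradient differences.
    diffs = [np.abs(grads[-i - 1] - grads[-i - 2])
             for i in range(min(n_diffs, len(grads) - 1))]
    if diffs:
        xi = 1.0 / (1.0 + np.exp(-np.mean(diffs, axis=0)))  # sigmoid
    else:
        xi = 1.0  # plain Adam step until enough history exists

    theta = theta - lr * xi * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```

The friction coefficient lies between 0.5 and 1, so it damps the Adam step when successive gradients are similar and leaves it nearly unchanged when they differ sharply.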

Highlights

  • In recent years, neural network-based algorithms have gained popularity owing to the availability of big data and large computing power in the form of GPUs

  • Many attempts have been made to improve the convergence of gradient descent so that neural networks can fully benefit from big data and large computing power

  • The most common way of increasing the convergence rate of gradient descent is the use of momentum


Summary

INTRODUCTION

Neural network-based algorithms are gaining popularity due to the availability of big data and large computing power in the form of GPUs. The most common way of increasing the convergence rate of gradient descent is the use of momentum. Two well-established methods of controlling the convergence rate are reducing the momentum and increasing the batch size. The Adam [3], diffGrad [4], and AdaBelief [5] optimization techniques reduce the momentum, whereas the adabatch technique [6] increases the batch size for better and faster optimization toward the global minimum. Adam and AdaBelief often overshoot the global minimum, whereas diffGrad suffers from slow convergence. We use both methods: controlling the convergence rate and increasing the batch size. Combining an adaptive batch size with a convergence technique builds on our previous work, which has shown success in improving the convergence rate [7].
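As a rough illustration of the second ingredient, combining an optimizer with an adaptively growing batch size, the sketch below shows a generic training loop that periodically doubles the mini-batch size. The doubling schedule (`grow_every`, `max_batch`) and the helper callbacks `grad_fn` and `step_fn` are hypothetical placeholders for illustration; they are not the schedule used by adabatch [6] or by the proposed method.

```python
import numpy as np

def train_with_adaptive_batch(X, y, theta, grad_fn, step_fn,
                              batch_size=32, max_batch=512,
                              grow_every=5, epochs=30):
    """Generic adaptive-batch training loop (illustrative only).

    `grad_fn(theta, Xb, yb)` returns the mini-batch gradient and
    `step_fn(theta, g)` applies one optimizer update, e.g. the
    diffGrad-style step sketched earlier.
    """
    n = len(X)
    for epoch in range(epochs):
        idx = np.random.permutation(n)
        for start in range(0, n, batch_size):
            batch = idx[start:start + batch_size]
            g = grad_fn(theta, X[batch], y[batch])
            theta = step_fn(theta, g)
        # Grow the batch periodically: larger batches give
        # lower-variance gradient estimates later in training.
        if (epoch + 1) % grow_every == 0:
            batch_size = min(2 * batch_size, max_batch)
    return theta
```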

BASICS OF GRADIENT DESCENT
PROPOSED METHODOLOGY
CONVERGENCE ANALYSIS
RESULTS AND DISCUSSION
CONCLUSIONS