Abstract

As a standard technique for training deep neural networks, dropout is widely used to regularize large models and avoid overfitting, and its performance gains are commonly explained as preventing co-adaptation between nodes. However, when correlations between nodes are compared after training networks with and without dropout, the question arises whether co-adaptation avoidance fully explains the dropout effect. In this paper, we propose an additional explanation of why dropout works and, based on it, a new technique for designing better activation functions. First, we show that dropout can be viewed as an optimization technique that pushes the input of a nonlinear activation function toward its saturation area by accelerating the flow of gradient information through that area during backpropagation. Building on this explanation, we propose gradient acceleration in activation functions (GAAF), a technique that lets gradients flow even in the saturation area. The input to the activation function can then climb onto the saturation area, which makes the network more robust because the model converges on a flat region. Experimental results support our explanation of dropout and confirm that the proposed GAAF technique improves image classification performance with the expected properties.
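To make the core idea concrete, the following is a minimal sketch of a gradient-accelerated activation. It is an illustrative assumption, not necessarily the authors' exact formulation: a small-amplitude sawtooth term is added to a sigmoid so that the forward output is nearly unchanged, while the derivative stays close to 1 even deep in the saturation region. The class name `GradientAcceleratedSigmoid` and the frequency hyperparameter `K` are hypothetical.

```python
import torch
import torch.nn as nn


class GradientAcceleratedSigmoid(nn.Module):
    """Sigmoid plus a small-amplitude sawtooth term (illustrative GAAF-style sketch).

    The sawtooth h(x) = (K*x - floor(K*x) - 0.5) / K is bounded by 1/(2K),
    so the forward output is essentially sigmoid(x). Because floor() has
    zero gradient, dh/dx = 1 almost everywhere, so gradients keep flowing
    even where sigmoid(x) is saturated.
    """

    def __init__(self, K: float = 100.0):  # K is an assumed hyperparameter
        super().__init__()
        self.K = K

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        sawtooth = (self.K * x - torch.floor(self.K * x) - 0.5) / self.K
        return torch.sigmoid(x) + sawtooth


if __name__ == "__main__":
    act = GradientAcceleratedSigmoid(K=100.0)
    x = torch.full((4,), 6.0, requires_grad=True)  # deep in sigmoid saturation
    act(x).sum().backward()
    # d/dx sigmoid(6) is about 0.0025, but the sawtooth contributes ~1,
    # so the gradient does not vanish in the saturation area.
    print(x.grad)
```

Under this sketch, inputs can settle in the flat saturation region without the optimizer losing its gradient signal, which is the behavior the abstract attributes to GAAF.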
