Abstract

Multilayer feed-forward neural networks are widely trained by minimizing an error function. Back-propagation (BP) is a popular training method for such networks, but it often suffers from local minima and slow convergence. These problems arise from the gradient behavior of the commonly used sigmoid activation function (SAF): the weight update approaches zero when the activation of a unit approaches unity or zero. To alleviate this problem, we propose a damped noisy gradient (DNG) for training a neural network (NN). A simple damped Gaussian noise is intentionally added to the gradient of the sigmoid activation function (AF). The validity of the proposed method is examined through simulations on real classification tasks, namely the Heart Disease, Ionosphere, Wine, Horse, Glass, and Soybean datasets. The algorithm is shown to outperform the original back-propagation (BP) as well as BP with logarithmic (LOG) and arctangent (ATAN) AFs.
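As a rough illustration of the idea (not the paper's exact formulation), the sketch below assumes the added noise is zero-mean Gaussian with a standard deviation that decays over epochs, and that it is added to the sigmoid derivative term used in the weight update; the network size, learning rate, decay schedule, and toy XOR data are all illustrative assumptions.

```python
# Hypothetical sketch of back-propagation with a damped noisy gradient (DNG).
# Assumed form (not taken from the paper): zero-mean Gaussian noise whose
# standard deviation decays exponentially with the epoch index, added to the
# sigmoid derivative a*(1-a) before the weight update.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def noisy_sigmoid_grad(a, epoch, sigma0=0.1, decay=0.01):
    """Sigmoid derivative a*(1-a) plus damped Gaussian noise.

    The plain derivative vanishes as the activation a approaches 0 or 1;
    the decaying noise keeps the update from collapsing to zero early in
    training and fades out as training proceeds (assumed schedule).
    """
    sigma = sigma0 * np.exp(-decay * epoch)
    return a * (1.0 - a) + rng.normal(0.0, sigma, size=a.shape)

# Toy XOR data, one hidden layer, plain gradient descent (illustrative).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 0.5, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 0.5, (4, 1)); b2 = np.zeros(1)
lr = 0.5

for epoch in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass, using the damped noisy gradient in place of a*(1-a).
    err_out = (out - y) * noisy_sigmoid_grad(out, epoch)
    err_hid = (err_out @ W2.T) * noisy_sigmoid_grad(h, epoch)

    W2 -= lr * h.T @ err_out;  b2 -= lr * err_out.sum(axis=0)
    W1 -= lr * X.T @ err_hid;  b1 -= lr * err_hid.sum(axis=0)

print(np.round(out, 3))  # ideally approaches [0, 1, 1, 0] on this toy task
```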
