Abstract

The learning algorithms for multilayer feed-forward networks can be classified into two categories: gradient-based and non-gradient methods. Gradient descent algorithms such as backpropagation (BP) and its variants are widely used in many application areas because of their simplicity and convenience. However, the most serious problem associated with BP is the local minima problem. We propose an improved gradient descent algorithm intended to mitigate the local minima problem without sacrificing the simplicity of the gradient descent method. We call this method the dual gradient learning algorithm, in which the upper connections (hidden-to-output) and the lower connections (input-to-hidden) are evaluated and trained separately. To do so, target values for the hidden layer units are introduced and used as the evaluation criteria for the lower connections. Simulations on several benchmark problems and a real classification task demonstrate the validity of the proposed method.
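
The abstract does not spell out how the hidden-layer targets are derived, so the Python sketch below only illustrates the two-phase idea under stated assumptions: the hidden targets t_h are formed by stepping the hidden activations against the output-error gradient (a hypothetical choice, not necessarily the paper's), and both connection groups are trained by plain gradient descent on squared error.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def dual_gradient_step(x, t, W1, W2, lr=0.1, target_step=0.5):
        # Forward pass through both connection groups.
        h = sigmoid(W1 @ x)          # lower connections: input -> hidden
        y = sigmoid(W2 @ h)          # upper connections: hidden -> output

        # Output-layer error signal (squared error + sigmoid derivative).
        delta_out = (y - t) * y * (1.0 - y)

        # Hidden targets (ASSUMED form): step the hidden activations
        # against the output-error gradient, clipped to the sigmoid range.
        t_h = np.clip(h - target_step * (W2.T @ delta_out), 0.0, 1.0)

        # Upper connections: evaluated against the output targets t.
        W2 -= lr * np.outer(delta_out, h)

        # Lower connections: evaluated against the hidden targets t_h,
        # independently of the output layer.
        delta_hid = (h - t_h) * h * (1.0 - h)
        W1 -= lr * np.outer(delta_hid, x)
        return W1, W2

    # Toy usage: one input/target pair, 2 inputs, 3 hidden units, 1 output.
    W1 = rng.normal(scale=0.5, size=(3, 2))
    W2 = rng.normal(scale=0.5, size=(1, 3))
    W1, W2 = dual_gradient_step(np.array([1.0, 0.0]), np.array([1.0]), W1, W2)

The key design point the sketch tries to capture is that the lower connections are updated against an explicit hidden-layer criterion rather than by chaining the output gradient all the way back, which is what gives the two connection groups their separate evaluations.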
