Abstract

Gradient descent algorithms such as backpropagation (BP) and its variants are widely used to train multi-layer feed-forward networks. The most serious problem associated with BP, however, is that of local minima; in particular, an excessive number of hidden nodes deepens the local minima problem. We propose an algorithm that trains stably despite a large number of hidden nodes. We call it the separate learning algorithm because the hidden-to-output and input-to-hidden weights are trained separately. Simulations on several benchmark problems demonstrate the validity of the proposed method.
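The abstract does not specify the update rules, so the following is only a minimal illustrative sketch of the separate-learning idea, not the authors' exact method: a one-hidden-layer network trained by alternating two phases, first updating the hidden-to-output weights with the input-to-hidden weights frozen, then the reverse. The function name separate_learning, the sigmoid activations, and all hyperparameters are assumptions made for illustration.

```python
# Sketch (assumed details): alternate gradient updates on W2 (hidden-to-output)
# and W1 (input-to-hidden), freezing the other set of weights in each phase.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def separate_learning(X, y, n_hidden=8, epochs=5000, lr=0.5):
    """Alternating (separate) training of a one-hidden-layer network.
    X: (n_samples, n_in), y: (n_samples, n_out) with targets in (0, 1)."""
    n_in, n_out = X.shape[1], y.shape[1]
    W1 = rng.normal(scale=0.5, size=(n_in, n_hidden))   # input-to-hidden
    W2 = rng.normal(scale=0.5, size=(n_hidden, n_out))  # hidden-to-output
    for _ in range(epochs):
        # Phase 1: update only the hidden-to-output weights (W1 frozen).
        H = sigmoid(X @ W1)                         # hidden activations
        out = sigmoid(H @ W2)
        delta_out = (out - y) * out * (1.0 - out)   # output-layer error signal
        W2 -= lr * H.T @ delta_out
        # Phase 2: update only the input-to-hidden weights (W2 frozen).
        H = sigmoid(X @ W1)
        out = sigmoid(H @ W2)
        delta_out = (out - y) * out * (1.0 - out)
        delta_hid = (delta_out @ W2.T) * H * (1.0 - H)  # backpropagated error
        W1 -= lr * X.T @ delta_hid
    return W1, W2

# Usage on XOR, a classic benchmark for local-minima behavior.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
W1, W2 = separate_learning(X, y)
print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))
```

Here each phase happens to use the same gradient step BP would take for that layer; the point of the sketch is only the separation of the two weight sets into distinct training phases, which is what the abstract describes.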
