Abstract

The backpropagation algorithm is an iterative gradient descent algorithm designed to train multilayer neural networks. Despite its popularity and effectiveness, the orthogonal steps it takes near the optimum point (zigzagging) slow down its convergence. To overcome this inefficiency of the conventional backpropagation algorithm, one of the authors earlier proposed a deflecting-gradient technique, the Partan backpropagation learning algorithm, to improve convergence [3]. The convergence time of multilayer networks has been further improved through dynamic adaptation of their learning rates [6]. In this paper, an extension of the dynamic parallel tangent learning algorithm is proposed in which each connection has its own learning rate as well as its own acceleration rate, and these individual rates are dynamically adapted as learning proceeds. Simulation studies are carried out on several learning problems, and a faster rate of convergence is achieved for all of them.
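
The abstract does not spell out the update equations, but the idea lends itself to a short illustration. The Python sketch below shows one plausible form of an iterative parallel tangent (Partan) update, a gradient step followed by an acceleration step along the deflected direction, with a separate, dynamically adapted learning and acceleration rate per weight. The function names, the sign-based adaptation rule, and all constants are hypothetical stand-ins, not taken from the paper.

import numpy as np

def partan_step(w, w_prev, grad_fn, lr, accel):
    """One iterative parallel-tangent (Partan) update: an ordinary
    gradient step followed by an acceleration (deflection) step along
    the direction from the previous iterate. lr and accel are per-weight
    arrays, i.e. each connection carries its own pair of rates."""
    z = w - lr * grad_fn(w)          # plain gradient descent step
    return z + accel * (z - w_prev)  # deflecting / acceleration step

def adapt_rates(rate, grad, prev_grad, up=1.05, down=0.7, cap=0.9):
    """Hypothetical sign-based adaptation: grow a weight's rate while
    its gradient keeps its sign, shrink it when the sign flips."""
    same_sign = grad * prev_grad > 0
    return np.clip(np.where(same_sign, rate * up, rate * down), 1e-6, cap)

# Toy usage on an ill-conditioned quadratic bowl, a stand-in for a loss
# surface that makes plain gradient descent zigzag.
grad_fn = lambda w: np.array([2.0, 20.0]) * w
w_prev = w = np.array([1.0, 1.0])
lr, accel = np.full(2, 0.01), np.full(2, 0.5)
prev_g = grad_fn(w)
for _ in range(200):
    w_next = partan_step(w, w_prev, grad_fn, lr, accel)
    g = grad_fn(w_next)
    lr = adapt_rates(lr, g, prev_g, cap=0.05)      # keep the step stable
    accel = adapt_rates(accel, g, prev_g, cap=0.9)
    w_prev, w, prev_g = w, w_next, g
print(w)  # approaches the minimum at the origin

With a single shared lr and accel this reduces to an ordinary Partan iteration; giving each coordinate its own pair lets the flat and steep directions of the bowl settle on different step sizes, which is the kind of per-connection effect the abstract describes.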
