Abstract

The backpropagation (BP) learning algorithm is the most widely used supervised learning technique for training multi-layer feed-forward neural networks. Many modifications of BP have been proposed to speed up the convergence of the original algorithm; however, these modified algorithms sometimes fail to converge because they become trapped in local minima. This paper proposes a new algorithm that provides a systematic way to combine the characteristics of different fast learning algorithms, so that a learning process converges reliably at a fast learning rate. Our performance investigation shows that the proposed algorithm always converges, with a fast learning rate, on two popular complicated applications, whereas other popular fast learning algorithms exhibit very poor global convergence on these two applications.
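
For context, the sketch below shows plain BP, the baseline the abstract refers to: gradient-descent weight updates on a small feed-forward network. The XOR task, layer sizes, fixed learning rate, and use of NumPy are illustrative assumptions; this is not the paper's proposed hybrid algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR training set: 2 inputs -> 1 output (an assumed toy task).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Weights and biases: input->hidden (2x4) and hidden->output (4x1).
W1 = rng.normal(scale=0.5, size=(2, 4))
b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1))
b2 = np.zeros(1)

lr = 0.5  # fixed learning rate; fast BP variants typically adapt this
for epoch in range(10_000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)             # hidden activations
    out = sigmoid(h @ W2 + b2)           # network output

    # Backward pass: deltas for squared-error loss with sigmoid units.
    d_out = (out - y) * out * (1 - out)  # delta at output layer
    d_h = (d_out @ W2.T) * h * (1 - h)   # delta at hidden layer

    # Gradient-descent weight update (the core of BP).
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(3))  # should approach [[0], [1], [1], [0]]
```

With an unlucky initialization or an ill-chosen learning rate, this plain BP loop can stall at a local minimum of the error surface, which is the convergence problem the abstract says the proposed algorithm addresses.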
