Abstract

The backpropagation algorithm is the most popular procedure for training self-learning feedforward neural networks. However, its rate of convergence is slow because backpropagation is essentially a steepest descent method. Several researchers have proposed other approaches to improve the rate of convergence: conjugate gradient methods, dynamic modification of the learning parameters, full quasi-Newton or Newton methods, stochastic methods, etc. Quasi-Newton methods have been criticized because they require significant computation time and memory to update the Hessian matrix. This paper proposes a modification of the classical quasi-Newton approach that takes the structure of the network into account. With this modification, the size of the problem is no longer proportional to the total number of weights but depends on the number of neurons of each level. The modified quasi-Newton method is tested on two examples and compared to classical approaches. The numerical results show that the approach yields a clear gain in computational time without increasing memory requirements.
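The abstract does not spell out the structured update itself, so the sketch below is only a generic illustration of the underlying idea: instead of one quasi-Newton (BFGS) inverse-Hessian approximation over the full weight vector, a separate small approximation is kept and updated per level (layer), so the stored matrices are sized by each level of the network. The layer sizes, the toy quadratic loss standing in for the training error, and the fixed step length are assumptions for illustration, not the method described in the paper.

```python
# Hypothetical sketch (not the authors' algorithm): a block-diagonal BFGS
# scheme that keeps one small inverse-Hessian approximation per layer
# instead of a single approximation over all weights.
import numpy as np

def bfgs_update(H, s, y):
    """Standard BFGS update of an inverse-Hessian approximation H,
    given a step s and the corresponding gradient difference y."""
    sy = float(s @ y)
    if sy <= 1e-12:                      # skip if the curvature condition fails
        return H
    rho = 1.0 / sy
    I = np.eye(H.shape[0])
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

rng = np.random.default_rng(0)
layer_sizes = [4 * 3, 3 * 2]             # weights of a hypothetical 4-3-2 net

# Toy per-layer quadratic losses standing in for the network training error.
A = [rng.standard_normal((n, n)) / np.sqrt(n) for n in layer_sizes]
A = [a @ a.T + np.eye(n) for a, n in zip(A, layer_sizes)]
grad = lambda w, a: a @ w                # gradient of 0.5 * w^T a w

w = [rng.standard_normal(n) for n in layer_sizes]
H = [np.eye(n) for n in layer_sizes]     # one small approximation per layer
step = 0.1                               # fixed step; no line search here

for _ in range(50):
    for k in range(len(w)):
        g = grad(w[k], A[k])
        s = -step * (H[k] @ g)           # quasi-Newton direction, scaled
        y = grad(w[k] + s, A[k]) - g
        H[k] = bfgs_update(H[k], s, y)
        w[k] = w[k] + s

full = sum(layer_sizes) ** 2
blocks = sum(n * n for n in layer_sizes)
print("stored Hessian entries:", full, "(full) vs", blocks, "(per layer)")
print("final toy loss:", sum(0.5 * wk @ a @ wk for wk, a in zip(w, A)))
```

For this illustrative 4-3-2 network, a full approximation would store 18 × 18 = 324 entries, while the per-level blocks store 12² + 6² = 180; this is the kind of saving that exploiting the network structure is meant to provide.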
