Abstract

In this paper we describe several different training algorithms for feed-forward neural networks (FFNNs). All of these algorithms use the gradient of the performance (energy) function to determine how to adjust the weights so that the performance function is minimized, with the back-propagation algorithm used to increase the speed of training. The algorithms differ in their computations, and thus in the form of their search directions and in their storage requirements; however, none of them has global properties that make it suited to all problems.
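
To make the gradient-based weight update concrete, the following is a minimal sketch (not code from the paper) of one training step for a single-hidden-layer FFNN with a sum-of-squares performance function; the tanh transfer function, the layer shapes, and the learning rate lr are illustrative assumptions.

```python
# Minimal sketch: one gradient-descent update for a single-hidden-layer FFNN.
# Performance function assumed: F = 0.5 * ||y - t||^2 (sum of squared errors).
import numpy as np

def train_step(W1, b1, W2, b2, x, t, lr=0.01):
    # Forward pass
    a1 = np.tanh(W1 @ x + b1)              # hidden-layer output (tanh transfer function)
    y = W2 @ a1 + b2                       # linear output layer
    e = y - t                              # output error (dF/dy for the assumed F)

    # Backward pass: back-propagate the gradient of F through the layers
    dW2 = np.outer(e, a1)                  # dF/dW2
    db2 = e                                # dF/db2
    delta1 = (W2.T @ e) * (1.0 - a1**2)    # error propagated through the tanh derivative
    dW1 = np.outer(delta1, x)              # dF/dW1
    db1 = delta1                           # dF/db1

    # Move the weights against the gradient so the performance function decreases
    return (W1 - lr * dW1, b1 - lr * db1,
            W2 - lr * dW2, b2 - lr * db2)
```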

Highlights

  • The back-propagation (BP) process can train multilayer FFNNs

  • During the training process, the algorithm moves across the performance surface as the weights are updated

  • We describe in some detail a one-dimensional search procedure that is guaranteed to find a learning rate satisfying the strong Wolfe conditions (1); a sketch of this check follows the list
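
As a rough illustration of what such a search must verify, here is a minimal sketch (not the paper's procedure) that tests whether a candidate learning rate alpha satisfies the strong Wolfe conditions along a search direction p; the constants c1 and c2 are common illustrative choices, not values taken from the paper.

```python
# Minimal sketch: check the strong Wolfe conditions for a candidate step length.
import numpy as np

def satisfies_strong_wolfe(f, grad, x, p, alpha, c1=1e-4, c2=0.9):
    g0 = grad(x) @ p                       # directional derivative at the current point
    f_new = f(x + alpha * p)
    g_new = grad(x + alpha * p) @ p        # directional derivative at the trial point
    sufficient_decrease = f_new <= f(x) + c1 * alpha * g0
    curvature = abs(g_new) <= c2 * abs(g0) # strong curvature condition
    return sufficient_decrease and curvature
```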

Summary

INTRODUCTION

The back-propagation (BP) process can train multilayer FFNNs with differentiable transfer functions to perform function approximation of a continuous function f on R^n, pattern association, and pattern classification. BFGS algorithm (TRAINBFG): the basic step of this method is x_{k+1} = x_k - A_k^{-1} g_k, where A_k is the Hessian matrix (second derivatives) [1] of the performance index at the current values of the weights and biases, and g_k is the gradient of the error surface at w(k). This method often converges faster than conjugate gradient methods. In most of the training algorithms a learning rate is used to determine the length of the weight update (step size). It does require the computation of the derivatives (back propagation) in addition to the computation of the performance function, but it overcomes this limitation by locating the minimum in fewer steps.
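
As an illustration of the basic step x_{k+1} = x_k - A_k^{-1} g_k, the following is a minimal sketch (not MATLAB's trainbfg) of a quasi-Newton iteration in which H_k approximates the inverse Hessian A_k^{-1} and is refreshed with the standard BFGS update. The quadratic test function, the unit step length, and the identity initialization of H are illustrative assumptions; the full training algorithm also combines this step with a line search to choose the step length.

```python
# Minimal sketch: one BFGS quasi-Newton step x_{k+1} = x_k - H_k g_k,
# where H_k approximates the inverse Hessian and is updated from the
# observed changes in the weights (s) and the gradient (y).
import numpy as np

def bfgs_step(x, H, grad):
    g = grad(x)
    x_new = x - H @ g                      # quasi-Newton step with unit step length
    s = x_new - x                          # change in the weights
    y = grad(x_new) - g                    # change in the gradient
    rho = 1.0 / (y @ s)
    I = np.eye(len(x))
    # Standard BFGS update of the inverse-Hessian approximation
    H_new = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
            + rho * np.outer(s, s)
    return x_new, H_new

# Usage example on a simple quadratic performance surface 0.5 * x^T A x
A = np.array([[1.5, 0.2], [0.2, 0.8]])
grad = lambda x: A @ x                     # gradient of the quadratic
x, H = np.array([1.0, 1.0]), np.eye(2)
for _ in range(5):
    x, H = bfgs_step(x, H, grad)
print(np.linalg.norm(grad(x)))             # gradient norm is now much smaller than at the start
```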

SPEED AND MEMORY COMPARISON
LIMITATIONS AND CAUTIONS