Abstract

Learning in the context of neural networks means finding a set of synaptic weights that makes the network perform the desired function. Backpropagation has two major drawbacks in its learning efficiency: slow learning speed and convergence to local minima. In this paper, one-dimensional (1D) minimization with respect to the learning rate is incorporated into the backpropagation algorithm. Several 1D optimization techniques are used to adjust the learning rate during training: the Goldstein method, the Wolfe-Powell method, and the dichotomy method. Combined with the backpropagation algorithm, these methods are used to learn the forward and inverse kinematic equations of a two-degrees-of-freedom robot arm manipulator. The comparative study presented in this paper evaluates these methods by simulation against the standard backpropagation algorithm and the optimal gradient method, respectively. The simulation results show that the gradient method combined with the Goldstein or Wolfe-Powell method gives the best performance and the fastest minimization of the criterion.
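As a rough illustration of the approach the abstract describes, the sketch below uses a Goldstein-condition line search to pick the learning rate at each gradient step. This is a minimal sketch on a toy quadratic criterion, not the paper's implementation: the names (goldstein_step, the matrix A) and the bracketing strategy are assumptions, and the paper instead minimizes a network's error over kinematic training data.

```python
import numpy as np

def goldstein_step(f, grad, x, c=0.25, alpha0=1.0, max_iter=50):
    """Choose a learning rate alpha along d = -grad(x) satisfying the
    Goldstein conditions (0 < c < 0.5):

        f(x) + (1 - c) * alpha * slope <= f(x + alpha * d)
                                       <= f(x) + c * alpha * slope,

    where slope = grad(x) @ d < 0 for a descent direction.
    Illustrative sketch, not the paper's code.
    """
    g = grad(x)
    d = -g
    slope = g @ d              # negative along a descent direction
    fx = f(x)
    lo, hi = 0.0, np.inf       # bracketing interval for alpha
    alpha = alpha0
    for _ in range(max_iter):
        fa = f(x + alpha * d)
        if fa > fx + c * alpha * slope:
            hi = alpha         # insufficient decrease: step too long, shrink
        elif fa < fx + (1 - c) * alpha * slope:
            lo = alpha         # step too short: grow
        else:
            return alpha       # both Goldstein tests pass
        alpha = 2.0 * alpha if np.isinf(hi) else 0.5 * (lo + hi)
    return alpha               # fall back to the last trial step

# Toy quadratic criterion standing in for the network's error surface.
A = np.array([[3.0, 0.2],
              [0.2, 1.0]])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x

x = np.array([4.0, -2.0])
for epoch in range(20):
    lr = goldstein_step(f, grad, x)   # 1D minimization over the learning rate
    x = x - lr * grad(x)
print("x =", x, "f(x) =", f(x))
```

The Wolfe-Powell variant would replace the lower Goldstein test with a curvature condition on the directional derivative at the trial point, and the dichotomy method would bisect a bracketed interval directly; the surrounding gradient loop stays the same.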
