Abstract

The most widely used algorithm for training multilayer feedforward neural networks is backpropagation (BP), an iterative gradient descent algorithm. Since its appearance, various methods that modify conventional BP have been proposed to improve its efficiency. One such algorithm, which uses an adaptive learning rate, is backpropagation with variable stepsize (BPVS). Parallel tangent methods are used in global optimization to modify and improve simple gradient descent by occasionally taking the difference between the current point and the point two steps earlier as the search direction, instead of the gradient. In this study, we investigate the combination of the BPVS method with the parallel tangent approach for neural network training, and we report experimental results on well-known test problems to evaluate the efficiency of the method.
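
To illustrate the parallel tangent idea described above, the following is a minimal sketch in Python, assuming a generic differentiable objective. The function name `partan_descent`, the fixed step size, and the acceleration schedule are illustrative placeholders, not the authors' BPVS scheme (which additionally adapts the stepsize during training).

```python
import numpy as np

def partan_descent(grad, w0, lr=0.05, steps=200, accel_every=2):
    """Gradient descent with periodic parallel tangent (partan) steps.

    Every `accel_every` iterations, the search direction is the
    difference between the current point and the point two steps
    earlier, instead of the negative gradient. All parameter values
    here are illustrative; a practical method (e.g. BPVS) would
    adapt the stepsize rather than keep it fixed.
    """
    w = w0.copy()
    history = [w.copy()]          # iterates w_0, w_1, ...
    for k in range(1, steps + 1):
        if k % accel_every == 0 and len(history) >= 3:
            # Parallel tangent step: move along w_k - w_{k-2}.
            d = w - history[-3]
        else:
            # Ordinary gradient descent step.
            d = -grad(w)
        w = w + lr * d
        history.append(w.copy())
    return w

# Usage sketch: minimize the quadratic f(w) = 0.5 * w^T A w,
# whose unique minimizer is the origin.
A = np.array([[10.0, 0.0],
              [0.0,  1.0]])
grad = lambda w: A @ w
w_final = partan_descent(grad, np.array([1.0, 1.0]))
print(w_final)  # approaches [0, 0]
```

In a neural network setting, `grad` would be the gradient of the training error with respect to the weights as computed by backpropagation; the partan steps serve to accelerate progress along narrow valleys of the error surface where plain gradient descent zigzags.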
