Abstract

The Back Propagation (BP) algorithm suffers from slow training, easy convergence to local minima, and training saturation. To overcome these problems, we created a new dynamic function for both the training rate and the momentum term. In this study, we present the BPDRM algorithm, which trains with a dynamic training rate and a dynamic momentum term. We also propose a new strategy, consisting of multiple steps, to avoid inflation in the gross weight when each of the training rate and the momentum term is added as a dynamic function. In this proposed strategy, fitting is done by establishing a relationship between the dynamic training rate and the dynamic momentum. As a result, this study places an implicit dynamic momentum term in the dynamic training rate: αdmic = (1/ηdmic). This procedure keeps the weights as moderate as possible (neither too small nor too large). The 2-dimensional XOR problem and the Bupa dataset were used as benchmarks for testing the effects of the new strategy. All experiments were performed in MATLAB (R2012a). The experimental results show that the dynamic BPDRM algorithm provides superior training performance and trains faster than the BP algorithm at the same error limit.

Highlights

  • The Back Propagation (BP) algorithm is commonly used in robotics, automation and the Global Positioning System (GPS) (Thiang and Pangaldus, 2009; Tieding et al., 2009)

  • We proposed a new strategy, which consists of two steps, to avoid inflation in the gross weight when each training rate and momentum term is added as a dynamic function

  • We proposed a new strategy to avoid gross-weight inflation in the fitting procedure by creating a relationship between the dynamic training rate and the dynamic momentum; accordingly, we placed an implicit momentum function in the training rate, αdmic = f, which was defined as the implicit training rate proposed in Eq. 2



Introduction

The Back Propagation (BP) algorithm is commonly used in robotics, automation and the Global Positioning System (GPS) (Thiang and Pangaldus, 2009; Tieding et al., 2009). The back propagation algorithm led to a tremendous breakthrough in the application of multilayer perceptrons (Moalem and Ayoughi, 2010; Oh and Lee, 1995). It has been applied successfully in many areas and is an efficient training algorithm for multilayer perceptrons (Iranmanesh and Mahdavi, 2009). Gradient descent is commonly used to adjust the weights according to the training error, but it is not guaranteed to find the global minimum error, because training is slow and converges to local minima (Kotsiopoulos and Grapsa, 2009; Nand et al., 2012; Shao and Zheng, 2009; Zhang, 2010). Training becomes stuck at a local minimum when the outputs of the hidden layers and of the output layer approach 1 or 0 extremely closely, i.e., saturation (Dai and Liu, 2012; Shao and Zheng, 2009; Zakaria et al., 2010).
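The gradient-descent update with a momentum term, and the paper's idea of tying the momentum coefficient to the training rate as αdmic = 1/ηdmic, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `bp_update` and the numeric values are assumptions, and the standard momentum form Δw = −η·∇E + α·Δw_prev is used.

```python
def bp_update(w, grad, prev_delta, eta):
    """One gradient-descent weight update with a momentum term.

    Following the paper's BPDRM idea, the momentum coefficient is the
    implicit dynamic momentum alpha = 1 / eta (alpha_dmic = 1 / eta_dmic).
    All names and values here are illustrative, not the authors' code.
    """
    alpha = 1.0 / eta                         # implicit dynamic momentum
    delta = -eta * grad + alpha * prev_delta  # standard momentum update form
    return w + delta, delta

# Single illustrative step: eta = 2.0 gives alpha = 0.5, so
# delta = -2.0 * 0.5 + 0.5 * 0.2 = -0.9 and w becomes 1.0 - 0.9 = 0.1.
w_new, delta = bp_update(w=1.0, grad=0.5, prev_delta=0.2, eta=2.0)
```

Note the trade-off this coupling expresses: a larger training rate yields a smaller momentum contribution and vice versa, which is how the strategy keeps the combined weight change moderate.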


