Abstract

In neural network (NN) training, getting trapped in local minima is a persistent problem. In this paper, a modification of the standard backpropagation (BP) algorithm, called backpropagation with vector chaotic learning rate (BPVL), is proposed to improve the performance of NNs. The BPVL method generates a chaotic time series as a vector form of the Mackey-Glass and logistic-map series, and a rescaled version of this series is used as the learning rate (LR). In standard BP training, the weight updates become nearly inactive once the training session reaches a local minimum. With the integrated chaotic learning rate, the weight updates are accelerated in the local-minimum region. BPVL is tested on six real-world benchmark classification problems: breast cancer, diabetes, heart disease, Australian credit card, horse, and glass. The proposed BPVL outperforms the existing BP and BPCL methods in terms of both generalization ability and convergence rate.
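As an illustrative sketch only (the paper does not state its exact parameters here), the vector chaotic learning rate could be generated along these lines: a logistic-map series and a Mackey-Glass series are produced, each is rescaled into a learning-rate range, and the two are combined into one vector with one LR value per training pass. All parameter values, the rescaling range, and the averaging used to combine the two series below are assumptions, not the authors' published settings.

```python
import numpy as np

def logistic_map(n, x0=0.3, r=4.0):
    """Generate n samples of the logistic map x_{t+1} = r * x_t * (1 - x_t)."""
    x = np.empty(n)
    x[0] = x0
    for t in range(1, n):
        x[t] = r * x[t - 1] * (1.0 - x[t - 1])
    return x

def mackey_glass(n, tau=17, beta=0.2, gamma=0.1, p=10, x0=1.2, dt=1.0):
    """Generate n samples of the Mackey-Glass delay equation by Euler integration:
       dx/dt = beta * x(t - tau) / (1 + x(t - tau)^p) - gamma * x(t)."""
    x = np.full(n + tau, x0)  # constant initial history of length tau
    for t in range(tau, n + tau - 1):
        x[t + 1] = x[t] + dt * (beta * x[t - tau] / (1.0 + x[t - tau] ** p)
                                - gamma * x[t])
    return x[tau:]

def rescale(series, lo=0.01, hi=0.5):
    """Linearly rescale a series into an assumed learning-rate range [lo, hi]."""
    s_min, s_max = series.min(), series.max()
    return lo + (series - s_min) * (hi - lo) / (s_max - s_min)

# Vector chaotic learning rate: one LR value per training pass.
# Averaging the two rescaled series is an assumed way to form the "vector" LR.
n_epochs = 200
lr_vector = 0.5 * (rescale(logistic_map(n_epochs)) + rescale(mackey_glass(n_epochs)))
```

Because the chaotic sequence is bounded but keeps fluctuating, the step size never settles to a fixed value, which is what lets the weight updates stay active in flat local-minimum regions.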

Highlights

  • Gradient-based methods are among the most widely used error-minimization methods for training backpropagation networks

  • Improving the training efficiency of neural-network-based algorithms is an active area of research, and numerous approaches have been proposed in the literature

  • When the rescaled learning rate (RLR) ranges from -0.03 to 0.15, the testing error rate (TER) and iteration count of Backpropagation with Vector Chaotic Learning Rate (BPVL) are 11.4489 dB and 22.40, versus 11.8011 dB and 41.90 for standard BP and 11.5776 dB and 31.80 for BPCL. These results show that the generalization ability of BPVL is better than that of BP and BPCL on the credit card problem

Summary

INTRODUCTION

Gradient-based methods are among the most widely used error-minimization methods for training backpropagation networks. Improving the training efficiency of neural-network-based algorithms is an active area of research, and numerous approaches have been proposed in the literature. Many modifications proposed to improve the performance of BP have focused on solving the "flat spot" problem [7] to increase generalization ability. Cheung enhanced this approach by dividing the learning process into multiple phases and assigning different fast learning algorithms to different phases to improve the convergence rate [15]. All of these methods require considerable computational effort and do not guarantee good generalization ability in all cases. A modified BP algorithm, called 'Backpropagation with Vector Chaotic Learning Rate' (BPVL), is proposed, in which the learning rate is a vector form of the Mackey-Glass and logistic-map chaotic time series. In each training cycle, if the termination condition is fulfilled, training stops and the trained NN is tested; otherwise, training returns to step 2 of the algorithm, as in the sketch below.
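The training loop itself is the usual gradient-descent BP cycle, with the chaotic series supplying the learning rate at each pass. The following is a minimal sketch under assumed details: a one-hidden-layer sigmoid network, mean-squared error, a logistic map standing in for the combined vector series, and an arbitrary error threshold as the termination condition. It is not the authors' exact procedure.

```python
import numpy as np

# Chaotic per-epoch learning rates from a logistic map, rescaled into [0.01, 0.5]
# (a stand-in here for the combined vector series; range is an assumption).
n_epochs = 200
lrs = np.empty(n_epochs)
lrs[0] = 0.3
for t in range(1, n_epochs):
    lrs[t] = 4.0 * lrs[t - 1] * (1.0 - lrs[t - 1])
lrs = 0.01 + 0.49 * lrs

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 4))              # toy inputs (assumed data)
y = (X.sum(axis=1, keepdims=True) > 0) * 1.0   # toy binary targets

W1 = rng.standard_normal((4, 8)) * 0.1         # input-to-hidden weights
W2 = rng.standard_normal((8, 1)) * 0.1         # hidden-to-output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch, lr in enumerate(lrs):               # step 2: one forward/backward pass
    h = sigmoid(X @ W1)                        # forward pass
    out = sigmoid(h @ W2)
    err = out - y
    mse = float(np.mean(err ** 2))
    d_out = err * out * (1.0 - out)            # backpropagate through output sigmoid
    d_h = (d_out @ W2.T) * h * (1.0 - h)       # backpropagate through hidden sigmoid
    W2 -= lr * (h.T @ d_out) / len(X)          # weight updates scaled by chaotic LR
    W1 -= lr * (X.T @ d_h) / len(X)
    if mse < 1e-3:                             # assumed termination condition
        break                                  # stop and test; otherwise repeat step 2
```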

Characteristics of Benchmark datasets
Experimental Process and Comparison
DISCUSSION
CONCLUSION
