Abstract

In this paper, an efficient weight initialization method is proposed using Cauchy's inequality based on sensitivity analysis to improve the convergence speed of single hidden layer feedforward neural networks. The proposed method ensures that the outputs of the hidden neurons lie in the active region, which increases the rate of convergence. The weights are then learned by minimizing the sum of squared errors and are obtained by solving a linear system of equations. The proposed method is simulated on various problems. In all the problems, the number of epochs and the time required by the proposed method are found to be minimal compared with other weight initialization methods.
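The Cauchy (Cauchy–Schwarz) inequality gives a direct way to keep hidden pre-activations in the active region: since |w · x| ≤ ‖w‖ ‖x‖, bounding the weight-vector norm relative to the largest input norm bounds every net input. The sketch below illustrates this idea only; the function name, the bound value, and the random initialization scheme are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def init_hidden_weights(X, n_hidden, active_bound=1.0, seed=0):
    """Illustrative sketch: scale random hidden-layer weights so the
    pre-activation net = W @ x stays in the sigmoid's active region
    (|net| <= active_bound) for every training input in X.

    By the Cauchy-Schwarz inequality, |w . x| <= ||w|| * ||x||, so
    choosing ||w|| <= active_bound / max_i ||x_i|| guarantees the bound.
    """
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n_hidden, X.shape[1]))
    max_input_norm = np.linalg.norm(X, axis=1).max()
    # Rescale each weight row so that ||w|| * max_i ||x_i|| <= active_bound.
    W *= active_bound / (np.linalg.norm(W, axis=1, keepdims=True) * max_input_norm)
    return W
```

With this scaling, no hidden neuron can start out saturated on the training set, which is the condition the abstract attributes to faster convergence.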

Highlights

  • The error backpropagation method has been widely used for the supervised training of feedforward neural networks (FNN)

  • Many techniques have been proposed to speed up this method, such as second-order algorithms [1,2], adaptive step size methods [3,4], least squares methods [5,6,7] and appropriate weight initialization methods [7,8,9]

  • Even though the time complexity of the proposed method is O(n²), it reaches the minimum error in minimal time because it involves solving a linear system of equations, and the outputs of the hidden neurons are ensured to be in the active region before finding the weights for each layer
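The "linear system of equations" step can be made concrete: once the hidden-layer outputs are fixed, minimizing the sum of squared errors over the output weights is an ordinary linear least-squares problem. The following is a minimal sketch of that step under that assumption; the function name and shapes are illustrative, not the paper's notation.

```python
import numpy as np

def solve_output_weights(H, T):
    """Given hidden-layer outputs H (n_samples x n_hidden) and targets T
    (n_samples x n_outputs), find output weights B minimizing the sum of
    squared errors ||H @ B - T||^2 by solving a linear least-squares
    system, rather than by iterative gradient descent."""
    B, *_ = np.linalg.lstsq(H, T, rcond=None)
    return B
```

Because this is a single direct solve, the per-layer training cost is dominated by the least-squares factorization rather than by many backpropagation epochs.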


Introduction

The error backpropagation method has been widely used for the supervised training of feedforward neural networks (FNN). Shimodaira [10] has proposed a weight initialization method (OIVS) based on geometrical considerations to improve the learning performance of the backpropagation algorithm in neural networks. This method is based on the equations representing the characteristics of the information transformation mechanism of a node. Drago and Ridella [8] have proposed a method called SCAWI to improve the performance of the backpropagation algorithm. In this method, the authors use the concept of "paralyzed neuron percentage" (PNP), which describes how often a neuron is in a saturated state while the magnitude of at least one output error is still high.
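The PNP idea can be sketched as a simple measurement over the training set. The sketch below is an illustrative reading of the definition above, not Drago and Ridella's exact formula: the saturation and error thresholds, the use of tanh, and the function name are all assumptions.

```python
import numpy as np

def paralyzed_neuron_percentage(net, err, sat_thresh=0.9, err_thresh=0.1):
    """Illustrative PNP: percentage of (pattern, neuron) cases in which a
    sigmoidal neuron is saturated (|tanh(net)| > sat_thresh) while at
    least one output error for that pattern is still large
    (max |err| > err_thresh). Thresholds are arbitrary choices here.

    net: (n_patterns, n_neurons) pre-activations
    err: (n_patterns, n_outputs) output errors
    """
    out = np.tanh(net)                                    # neuron activations
    saturated = np.abs(out) > sat_thresh                  # per (pattern, neuron)
    high_err = (np.abs(err).max(axis=1) > err_thresh)[:, None]
    return 100.0 * np.mean(saturated & high_err)
```

A high PNP signals that many neurons are stuck in the flat tails of the sigmoid while the network is still wrong, which is exactly the condition a good weight initialization tries to avoid.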

