Abstract

The selection of initial network weights is a known key factor affecting the convergence of artificial neural networks with sigmoidal activation functions. In this paper, a new initialization scheme is proposed that sets the network weights so that the activation functions are not saturated at the start of training. The proposed method ensures that the initial outputs of the hidden neurons lie in the active region of the sigmoid, which improves the network's rate of convergence. Unlike most earlier initialization schemes, the method does not depend on architectural parameters such as the size of the input or hidden layer. The performance of the proposed scheme is compared against eight well-known weight initialization routines on six benchmark real-world problems. The results show that the proposed routine enables the network to achieve better performance within the same number of training epochs. A right-tailed t-test further shows that the proposed scheme is significantly better than the other techniques in most cases and statistically similar in a few, but never underperforms. Hence, it may be considered a strong alternative to conventional neural network initialization techniques.
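The abstract does not spell out the exact initialization rule, but the core idea, keeping initial pre-activations inside the sigmoid's active region, can be illustrated with a short sketch. The snippet below is a hypothetical illustration, not the authors' method: it draws hidden-layer weights at random and rescales them per unit so that the magnitude of every pre-activation on the training inputs stays below an assumed active-region bound a_max = 4, a point beyond which the logistic sigmoid's derivative becomes negligible.

import numpy as np

def active_region_init(X, n_hidden, a_max=4.0, seed=None):
    """Hypothetical sketch: random hidden-layer weights rescaled so that
    every initial pre-activation on the inputs X satisfies |net| <= a_max,
    keeping the sigmoid units out of their saturated tails.
    The bound a_max = 4.0 is an assumption, not the paper's value."""
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    W = rng.uniform(-1.0, 1.0, size=(n_in, n_hidden))
    b = rng.uniform(-1.0, 1.0, size=n_hidden)
    net = X @ W + b                  # initial pre-activations on the data
    worst = np.abs(net).max(axis=0)  # largest |net| per hidden unit
    scale = np.minimum(1.0, a_max / np.maximum(worst, 1e-12))
    return W * scale, b * scale      # per-unit rescaling keeps |net| <= a_max

# Usage on standardized dummy inputs: no hidden unit starts saturated.
X = np.random.default_rng(0).standard_normal((200, 8))
W, b = active_region_init(X, n_hidden=10, seed=1)
outputs = 1.0 / (1.0 + np.exp(-(X @ W + b)))  # logistic sigmoid outputs
assert np.abs(X @ W + b).max() <= 4.0 + 1e-9

Rescaling each unit by its own factor preserves the random directions of the weight vectors while guaranteeing that no hidden neuron begins in a saturated regime, which is the property the paper attributes to its scheme.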
