Abstract

To improve the generalization ability of feed-forward neural networks, a new objective function is proposed for training single-hidden-layer networks. This objective function combines two information entropies: the cross entropy as the main optimization term and the fuzzy entropy as the regularization term. In this paper, the concept of entropy is incorporated into the network training process through regularization, and the corresponding learning rule of the neural network is derived. Experimental results show that networks trained with the proposed algorithm generalize better than those trained with other well-known learning methods at the same time complexity.
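The abstract does not give the exact form of the objective, so the following is only a minimal sketch of the idea it describes: a standard binary cross entropy on the network outputs plus a fuzzy-entropy penalty on the sigmoid hidden-layer activations, weighted by a hypothetical coefficient `lam`. The function names and the choice of a Shannon-style fuzzy entropy are assumptions for illustration, not the paper's definition.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def combined_objective(y_true, y_pred, hidden_act, lam=0.1, eps=1e-12):
    """Cross-entropy loss plus a fuzzy-entropy regularization term (sketch).

    y_true, y_pred : target and predicted output probabilities.
    hidden_act     : sigmoid activations of the hidden layer, in (0, 1).
    lam            : regularization weight (hypothetical hyperparameter).
    """
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    h = np.clip(hidden_act, eps, 1.0 - eps)

    # Main term: cross entropy between targets and network outputs.
    cross_entropy = -np.mean(
        y_true * np.log(y_pred) + (1.0 - y_true) * np.log(1.0 - y_pred)
    )

    # Regularization term: fuzzy entropy of the hidden activations.
    # It peaks when activations sit near 0.5 and vanishes as they approach
    # 0 or 1, so minimizing it pushes hidden units toward crisp values.
    fuzzy_entropy = -np.mean(
        h * np.log(h) + (1.0 - h) * np.log(1.0 - h)
    )

    return cross_entropy + lam * fuzzy_entropy
```

Under this assumed formulation, the gradient of the fuzzy-entropy term with respect to each hidden activation is simply log((1 - h) / h), which is what would enter the derived learning rule for the hidden-layer weights.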
