Abstract

The backpropagation (BP) algorithm is the standard strategy for training feedforward neural networks (FNNs), and gradient descent is the most popular numerical optimization method used to implement it. However, this technique frequently suffers from poor generalization and slow convergence. Inspired by the sparse response characteristic of the human neural system, several sparse-response BP algorithms have been developed that effectively improve generalization performance. Their essential idea is to impose an L1 penalty term on the hidden-layer responses in the standard error function of the FNN. In this paper, we focus on two remaining challenges: first, resolving the non-differentiability of the L1 penalty term by introducing smooth approximation functions; second, providing a rigorous convergence analysis for the resulting sparse-response BP algorithm. In addition, an illustrative numerical simulation is presented to support the theoretical results.
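
As a minimal illustration of the idea described above (a sketch under stated assumptions, not the paper's exact formulation), the following Python code trains a one-hidden-layer FNN by batch gradient descent on an error function augmented with a smoothed L1 penalty on the hidden-layer responses. The smooth approximation of |x| used here, sqrt(x^2 + eps), is one common choice and is assumed for illustration; the function names, network size, and parameter values (lam, lr, eps) are likewise hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def smooth_abs(x, eps=1e-4):
    """Smooth, everywhere-differentiable approximation of |x| (assumed form)."""
    return np.sqrt(x * x + eps)

def smooth_abs_grad(x, eps=1e-4):
    """Derivative of the smooth approximation above."""
    return x / np.sqrt(x * x + eps)

# Toy regression data: learn y = sin(x) on [-pi, pi]
X = rng.uniform(-np.pi, np.pi, size=(200, 1))
Y = np.sin(X)

n_in, n_hid, n_out = 1, 20, 1
W1 = rng.normal(scale=0.5, size=(n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(scale=0.5, size=(n_hid, n_out)); b2 = np.zeros(n_out)

lr, lam = 0.05, 1e-3  # learning rate and sparsity weight (illustrative values)

for epoch in range(2000):
    # Forward pass
    H = np.tanh(X @ W1 + b1)      # hidden-layer responses
    O = H @ W2 + b2               # linear output layer

    # Penalized error: mean squared error plus smoothed L1 penalty on H
    err = O - Y
    loss = (0.5 * np.mean(np.sum(err**2, axis=1))
            + lam * np.mean(np.sum(smooth_abs(H), axis=1)))

    # Backward pass (standard BP with the extra penalty gradient)
    n = X.shape[0]
    dO = err / n
    dW2 = H.T @ dO
    db2 = dO.sum(axis=0)
    dH = dO @ W2.T + lam * smooth_abs_grad(H) / n   # penalty term enters here
    dZ1 = dH * (1.0 - H**2)                         # derivative of tanh
    dW1 = X.T @ dZ1
    db1 = dZ1.sum(axis=0)

    # Gradient descent updates
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final penalized loss:", loss)
```

Because the penalty is differentiable everywhere, the hidden-layer gradient remains well defined even when responses pass through zero, which is the practical point of replacing the raw L1 term with a smooth surrogate.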
