Abstract

Hard-threshold nonlinearities are attractive for neural-network information processing because of their simplicity and low implementation cost, but they lack the differentiability needed for gradient-based training. Here, hard-threshold nonlinearities assisted by added noise are pooled into a large-scale summing array that approximates a neuron with a noise-smoothed activation function. This restores the differentiability that gradient-based learning requires, and such neurons are assembled into a feed-forward neural network. The added noise components that smooth the hard-threshold responses have adjustable parameters which are optimized adaptively during learning, and the converged non-zero optimal noise levels establish a beneficial role for added noise in the operation of the threshold neural network. In the retrieval phase, the threshold neural network operating at its non-zero optimal noise level is tested on data classification and handwritten-digit recognition, where it matches the state-of-the-art performance of existing backpropagation-trained analog neural networks while requiring only simpler two-state binary neurons.
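The mechanism summarized above, pooling many noisy hard-threshold units so that their averaged binary responses trace out a smooth, differentiable activation, can be illustrated with a minimal sketch. The sketch below assumes Gaussian added noise and a Heaviside threshold, so the smooth surrogate used for gradients is the Gaussian CDF; the class name, the log-parameterization of the trainable noise level, and the train/eval split are illustrative assumptions, not the paper's implementation.

```python
import math
import torch
import torch.nn as nn

class NoiseSmoothedThreshold(nn.Module):
    """Array of n_units hard-threshold (Heaviside) units whose common input
    receives independent zero-mean Gaussian noise. Averaging the binary
    outputs approximates a smooth activation (the Gaussian CDF), whose
    derivative is well defined, so gradients can flow during training.
    The noise scale sigma is a trainable parameter, kept positive via a
    log-parameterization."""

    def __init__(self, n_units: int = 100, init_sigma: float = 1.0):
        super().__init__()
        self.n_units = n_units
        self.log_sigma = nn.Parameter(torch.tensor(math.log(init_sigma)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        sigma = self.log_sigma.exp()
        if self.training:
            # Smooth surrogate: E[Heaviside(x + noise)] = Phi(x / sigma),
            # the standard-normal CDF, differentiable in both x and sigma.
            return 0.5 * (1.0 + torch.erf(x / (sigma * math.sqrt(2.0))))
        # Retrieval phase: actually pool n_units noisy two-state units.
        noise = sigma * torch.randn(self.n_units, *x.shape, device=x.device)
        return (x.unsqueeze(0) + noise > 0).float().mean(dim=0)
```

In training mode the surrogate gives exact gradients with respect to both the pre-activation and the noise level, so the noise parameter can be adapted alongside the synaptic weights; in the retrieval phase, as the array size grows, the pooled binary responses converge to the same smooth curve, which is the sense in which non-zero noise is beneficial.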
