Abstract

Recurrent multilayer network structures and Hebbian learning are two essential features of biological neural networks. An artificial recurrent multilayer neural network that performs supervised Hebbian learning, called probabilistic associative memory (PAM), was recently proposed. PAM is a recurrent multilayer network of processing units (PUs), each comprising a group of novel artificial neurons that generate spike trains. PUs are detectors and recognizers of the feature subvectors appearing in their receptive fields. In supervised learning by a PU, the label of the feature subvector is provided from outside PAM. Because a feature subvector may be shared by many causes and may contain parts from many causes, its label is sometimes difficult and costly to obtain, especially if there are many hidden layers and feedback connections. This paper presents an unsupervised learning scheme that is Hebbian in the following sense: the strength of a synapse increases if the outputs of the presynaptic and postsynaptic neurons are identical and decreases otherwise. This unsupervised Hebbian learning capability makes PAM both a good functional model of neuronal networks and a good learning machine for temporal hierarchical pattern recognition.
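The synapse-update rule stated above is concrete enough to sketch in code. The following is a minimal illustration of that rule only, not the paper's actual PAM implementation: it assumes binary (0/1) spike outputs and a dense weight matrix, and the function name hebbian_update, the step size delta, and the clipping bounds w_min and w_max are hypothetical choices made for the example.

import numpy as np

def hebbian_update(weights, pre_spikes, post_spikes,
                   delta=0.01, w_min=0.0, w_max=1.0):
    # weights     : (n_post, n_pre) synaptic strengths
    # pre_spikes  : (n_pre,)  binary outputs of presynaptic neurons
    # post_spikes : (n_post,) binary outputs of postsynaptic neurons
    #
    # agreement[i, j] is +1 where postsynaptic neuron i and presynaptic
    # neuron j emit the same output (both 0 or both 1), and -1 otherwise,
    # so agreeing synapses are strengthened and disagreeing ones weakened.
    agreement = np.where(post_spikes[:, None] == pre_spikes[None, :],
                         1.0, -1.0)
    return np.clip(weights + delta * agreement, w_min, w_max)

Under this rule, repeatedly presenting a fixed pre/post spike pattern drives the weights of agreeing pairs toward w_max and those of disagreeing pairs toward w_min, without any externally supplied label.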
