Abstract

A neuronal model of classical conditioning is proposed. The model is most easily described by contrasting it with a still influential neuronal model first analyzed by Hebb (1949). It is proposed that the Hebbian model be modified in three ways to yield a model more in accordance with animal learning phenomena. First, instead of correlating pre- and postsynaptic levels of activity, changes in pre- and postsynaptic levels of activity should be correlated to determine the changes in synaptic efficacy that represent learning. Second, instead of correlating approximately simultaneous pre- and postsynaptic signals, earlier changes in presynaptic signals should be correlated with later changes in postsynaptic signals. Third, a change in the efficacy of a synapse should be proportional to the current efficacy of the synapse, accounting for the initial positive acceleration in the S-shaped acquisition curves observed in animal learning. The resulting model, termed a drive-reinforcement model of single neuron function, suggests that nervous system activity can be understood in terms of two classes of neuronal signals: drives that are defined to be signal levels and reinforcers that are defined to be changes in signal levels. Defining drives and reinforcers in this way, in conjunction with the neuronal model, suggests a basis for a neurobiological theory of learning. The proposed neuronal model is an extension of the Sutton-Barto (1981) model, which in turn can be seen as a temporally refined extension of the Rescorla-Wagner (1972) model. It is shown that the proposed neuronal model predicts a wide range of classical conditioning phenomena, including delay and trace conditioning, conditioned and unconditioned stimulus duration and amplitude effects, partial reinforcement effects, interstimulus interval effects, second-order conditioning, conditioned inhibition, extinction, reacquisition effects, backward conditioning, blocking, overshadowing, compound conditioning, and discriminative stimulus effects. The neuronal model also eliminates some inconsistencies with the experimental evidence that occur with the Rescorla-Wagner and Sutton-Barto models. Implications of the neuronal model for animal learning theory, connectionist and neural network modeling, artificial intelligence, adaptive control theory, and adaptive signal processing are discussed. It is concluded that real-time learning mechanisms that do not require evaluative feedback from the environment are fundamental to natural intelligence and may have implications for artificial intelligence. Experimental tests of the model are suggested.
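To make the three proposed modifications to the Hebbian rule concrete, here is a minimal sketch (in Python, not from the paper) of a single-neuron weight update consistent with the abstract's description: earlier presynaptic *changes* are correlated with the later postsynaptic *change*, and each weight change is scaled by the synapse's current efficacy. The eligibility window `tau`, the lag constants `c`, and the restriction to positive presynaptic changes are illustrative assumptions; the published model's exact equation and constants may differ.

```python
import numpy as np

def drive_reinforcement_update(w, x_history, dy, c):
    """One weight update for a single model neuron.

    Sketches the three modifications to Hebbian learning described above:
      1. correlate *changes* in pre- and postsynaptic activity, not levels;
      2. correlate *earlier* presynaptic changes (lags 1..tau) with the
         *later* postsynaptic change at the current time step;
      3. scale each change by the synapse's current efficacy |w|, which
         yields the initially accelerating (S-shaped) acquisition curve.

    w         : (n,) current synaptic weights
    x_history : (tau + 2, n) presynaptic signals at times t - tau - 1 ... t
    dy        : scalar change in the postsynaptic signal at time t
    c         : (tau,) learning-rate constants for lags 1 ... tau
                (their relative sizes stand in for interstimulus-interval effects)
    """
    tau = len(c)
    dw = np.zeros_like(w)
    for k in range(1, tau + 1):
        # Change in each presynaptic signal k steps before the postsynaptic
        # change; only positive changes (stimulus onsets) are assumed eligible.
        dx = np.maximum(x_history[-1 - k] - x_history[-2 - k], 0.0)
        dw += c[k - 1] * np.abs(w) * dx * dy
    return w + dw

# Example (hypothetical values): a CS onset two steps before a rise in
# postsynaptic activity strengthens the corresponding weight.
w = np.array([0.1])
x_history = np.array([[0.0], [0.0], [1.0], [1.0], [1.0]])  # tau = 3
w = drive_reinforcement_update(w, x_history, dy=0.8, c=np.array([0.5, 0.3, 0.1]))
```

Because the postsynaptic change `dy` plays the role of a reinforcer and the signal levels themselves play the role of drives, this kind of update needs no separate evaluative feedback channel, which is the sense in which the abstract describes the mechanism as real-time and unsupervised.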
