Abstract

Four groups of subjects were given either 0-, 100-, 500-, or 1,000-msec delays of the unconditioned stimulus (UCS) contingent upon the occurrence of a conditioned response (CR) and were given a UCS 515 msec after conditioned stimulus (CS) onset when a CR did not occur. A fifth group received standard classical conditioning trials with an interstimulus interval of 515 msec. Overall performance decreased as CR-contingent UCS delay increased, with the classical conditioning group approximating the performance of the group receiving the 100-msec delay. The data were analyzed with the two-phase model of conditioning, with the following results: the duration of Phase 1 of the model increased with contingent delay; operator limits associated with CR-absent trials, or with CR and CR-absent trials combined, decreased as a function of delay; and operator limits associated exclusively with CR trials were unaffected by the delay. Subjects receiving a contingent delay of 0 msec gave the shortest-latency responses and exhibited reliable latency decreases across trials, suggesting an attempt to "beat" the UCS. The results were interpreted as contrary to what would be expected from law-of-effect theories, which postulate that reinforcement results from a CR-UCS interaction, although they could be subsumed under a drive or an associative-strength theory in which the aversive, or CR-supportive, strength of the UCS is assumed to be negatively correlated with contingent UCS delay.
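
The model-based results above (Phase 1 duration, operator limits) come from fitting a two-phase linear-operator model of conditioning. As a rough, illustrative sketch only, the Python below simulates a generic model of that family, assuming CR probability is held flat during Phase 1 and then moves toward separate operator limits after CR and CR-absent trials; every name and value here (simulate_cr_probability, theta_cr, limit_nocr, and so on) is hypothetical and not taken from the paper.

```python
import random

def simulate_cr_probability(n_trials, phase1_length, p0,
                            theta_cr, limit_cr,
                            theta_nocr, limit_nocr, rng):
    """Trial-by-trial CR probability under a two-phase linear-operator model.

    Phase 1 (first `phase1_length` trials): p is held at its initial value p0.
    Phase 2: after a CR trial, p moves a fraction theta_cr of the way toward
    limit_cr; after a CR-absent trial, it moves toward limit_nocr.
    """
    p = p0
    history = []
    for trial in range(n_trials):
        cr = rng.random() < p                    # did a CR occur this trial?
        history.append((trial, p, cr))
        if trial < phase1_length:
            continue                             # Phase 1: no learning yet
        if cr:
            p += theta_cr * (limit_cr - p)       # operator for CR trials
        else:
            p += theta_nocr * (limit_nocr - p)   # operator for CR-absent trials
    return history

# Example: a long-contingent-delay group might be modeled with a longer
# Phase 1 and a lower CR-absent operator limit (all values are made up).
hist = simulate_cr_probability(n_trials=100, phase1_length=25, p0=0.05,
                               theta_cr=0.10, limit_cr=0.90,
                               theta_nocr=0.10, limit_nocr=0.60,
                               rng=random.Random(0))
```

In these terms, the abstract's findings would correspond to a longer phase1_length and a lower limit_nocr at longer CR-contingent UCS delays, with limit_cr roughly unchanged.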
