Abstract

This study examined how operant behavior adapted to an abrupt but regular change in the timing of reinforcement. Pigeons were trained on a fixed-interval (FI) 15-s schedule of reinforcement during half of each experimental session, and on an FI 45-s (Experiment 1), FI 60-s (Experiment 2), or extinction schedule (Experiment 3) during the other half. FI performance was well characterized by a mixture of two gamma-shaped distributions of responses. When the longer FI schedule was in effect in the first half of the session (Experiment 1), a constant interference from the shorter FI was observed. When the shorter FI schedule was in effect in the first half of the session (Experiments 1, 2, and 3), the transition between schedules involved a decline in responding and a progressive rightward shift in the mode of the response distribution initially centered around the short FI. These findings are discussed in terms of the constraints they impose on quantitative models of timing, and in terms of their implications for information-based models of associative learning. This article is part of a Special Issue entitled: Associative and Temporal Learning.
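The characterization of FI performance as a mixture of two gamma-shaped response distributions can be illustrated with a minimal sketch. The parameter values below are hypothetical placeholders chosen so that each component's mode falls near one of the trained intervals in Experiment 1 (15 s and 45 s); they are not the fitted values from the study.

```python
import numpy as np
from scipy import stats

def gamma_mixture_pdf(t, w=0.5, k1=10.0, theta1=15 / 9, k2=10.0, theta2=45 / 9):
    """Mixture of two gamma densities over time within the interval.

    Illustrative parameters only: for shape k and scale theta, the gamma
    mode is (k - 1) * theta, so these components peak near 15 s and 45 s.
    """
    return (w * stats.gamma.pdf(t, a=k1, scale=theta1)
            + (1 - w) * stats.gamma.pdf(t, a=k2, scale=theta2))

# Evaluate the mixture on a fine grid and locate its local maxima.
t = np.linspace(0.1, 90, 2000)
density = gamma_mixture_pdf(t)
interior = (density[1:-1] > density[:-2]) & (density[1:-1] > density[2:])
peaks = t[1:-1][interior]  # one peak near each trained FI value
```

Under these assumed parameters the mixture is bimodal, with one mode near each trained interval; a rightward shift of the short-FI mode, as reported in the transitions above, would correspond to a gradual increase in that component's mode over the session.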
