Contingency is a critical concept for theories of associative learning and for the credit-assignment problem in reinforcement learning. Measuring and manipulating it has, however, been problematic. The information-theoretic definition of contingency, as normalized mutual information, makes it a readily computed property of the relation between reinforcing events, the stimuli that predict them, and the responses that produce them. When necessary, the dynamic range of the required temporal representation divided by the Weber fraction gives psychologically realistic plug-in estimates of the entropies. There is no measurable prospective contingency between a peck and reinforcement when pigeons peck on a variable interval schedule of reinforcement. There is, however, a perfect retrospective contingency between reinforcement and the immediately preceding peck. Degrading the retrospective contingency by gratis reinforcement reveals a critical value (.25), below which performance declines rapidly. Contingency is time-scale invariant, whereas the perception of proximate causality depends, we assume, on there being a short, fixed, psychologically negligible critical interval between cause and effect. Increasing the interval between a response and the reinforcement it triggers degrades the retrospective contingency, leading to a decline in performance that restores the contingency to at or above its critical value. Thus, there is no critical interval in the retrospective effect of reinforcement. We conclude with a short review of the broad explanatory scope of information-theoretic contingencies when regarded as causal variables in conditioning. We suggest that the computation of contingencies may supplant the computation of the sum of all future rewards in models of reinforcement learning.
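For concreteness, the two quantities the abstract names can be written as follows. This is a minimal formalization under the standard definitions of mutual information and entropy; the specific form of the plug-in estimate, which treats the dynamic range divided by the Weber fraction as a count of discriminable values, is our reading rather than a formula quoted from the paper:

\[
C_{X \to Y} \;=\; \frac{I(X;Y)}{H(Y)} \;=\; \frac{H(Y) - H(Y \mid X)}{H(Y)}, \qquad 0 \le C_{X \to Y} \le 1,
\]

\[
\hat{H} \;\approx\; \log_2 \hat{n}, \qquad \hat{n} \;=\; \frac{D}{w},
\]

where \(I(X;Y)\) is the mutual information between the predicting event \(X\) and the predicted event \(Y\), \(D\) is the dynamic range of the required temporal representation, \(w\) is the Weber fraction, and \(\hat{n}\) is the implied number of discriminable values. On this reading, a prospective contingency of 0 (as with pecks on a variable interval schedule) means the response carries no information about the time of reinforcement, while a retrospective contingency of 1 means reinforcement fully determines the identity of the immediately preceding response.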