Abstract

An implementation of punishment in the evolutionary theory of behavior dynamics is proposed and applied to responding on concurrent schedules of reinforcement with superimposed punishment. In this implementation, punishment causes behaviors to mutate, with a higher probability in a lean reinforcement context than in a rich one. Computational experiments were conducted in an attempt to replicate three findings from experiments with live organisms: (1) when punishment is superimposed on one component of a concurrent schedule, response rate decreases in the punished component and increases in the unpunished component; (2) when punishment is superimposed on both components at equal scheduled rates, preference increases over its no-punishment baseline; and (3) when punishment is superimposed on both components at rates proportional to the scheduled rates of reinforcement, preference remains unchanged from the baseline preference. Artificial organisms animated by the theory, and working on concurrent schedules with superimposed punishment, reproduced all of these findings. Given this outcome, it may be possible to discover a steady-state mathematical description of punished choice in live organisms by studying the punished choice behavior of artificial organisms animated by the evolutionary theory.
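The following is a minimal sketch, not the paper's actual implementation, of the mechanism the abstract describes: a delivered punisher mutates the punished behavior, and does so with a higher probability when the reinforcement context is lean than when it is rich. The bit-string representation of behaviors, the probability values, and the `rich_threshold` parameter are all illustrative assumptions.

```python
import random

# Illustrative parameters; values are assumptions, not taken from the paper.
BEHAVIOR_BITS = 10          # behaviors represented as short bit strings (integers)
RICH_MUTATION_PROB = 0.5    # punishment-induced mutation probability in a rich context
LEAN_MUTATION_PROB = 0.9    # higher mutation probability in a lean context

def punish(population, punished_index, reinforcement_rate, rich_threshold=1.0):
    """Apply a punisher to one behavior in the population.

    The punished behavior mutates (one randomly chosen bit is flipped) with a
    probability that depends on the reinforcement context: a lean context
    (rate below `rich_threshold`) yields a higher mutation probability.
    All names and thresholds here are hypothetical.
    """
    lean = reinforcement_rate < rich_threshold
    p_mutate = LEAN_MUTATION_PROB if lean else RICH_MUTATION_PROB
    if random.random() < p_mutate:
        bit = 1 << random.randrange(BEHAVIOR_BITS)
        population[punished_index] ^= bit    # flip one random bit of the behavior
    return population

# Example: punish the third behavior in a small population under a lean context.
pop = [random.randrange(2 ** BEHAVIOR_BITS) for _ in range(10)]
pop = punish(pop, punished_index=2, reinforcement_rate=0.2)
```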
