Abstract

In two-alternative discrimination tasks, experimenters usually randomize the location of the rewarded stimulus so that systematic behavior with respect to irrelevant stimuli can only produce chance performance on the learning curves. One way to achieve this is to use random numbers derived from a discrete binomial distribution to create a 'full random training schedule' (FRS). When using FRS, however, sporadic but long laterally-biased training sequences occur by chance, and such 'input biases' are thought to promote the generation of laterally-biased choices (i.e., 'output biases'). As an alternative, a 'Gellerman-like training schedule' (GLS) can be used. It removes most input biases by prohibiting the reward from appearing at the same location for more than three consecutive trials. The sequence of past rewards obtained from choosing a particular discriminative stimulus influences the probability of choosing that same stimulus on subsequent trials. Assuming that the long-term average ratio of choices matches the long-term average ratio of reinforcers, we hypothesized that a reduced amount of input biases in GLS compared to FRS should lead to a reduced production of output biases. We compared the choice patterns produced by a 'Rational Decision Maker' (RDM) in response to computer-generated FRS and GLS training sequences. To create a virtual RDM, we implemented an algorithm that generated choices based on past rewards. Our simulations revealed that, although the GLS presented fewer input biases than the FRS, the virtual RDM produced more output biases with GLS than with FRS under a variety of test conditions. Our results reveal that the statistical and temporal properties of training sequences interacted with the RDM to influence the production of output biases. Thus, discrete changes in the training paradigms did not translate linearly into modifications in the pattern of choices generated by an RDM. Virtual RDMs could be further employed to guide the selection of proper training schedules for perceptual decision-making studies.
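The two schedules lend themselves to a compact illustration. The Python sketch below is a minimal reading of the descriptions above: FRS as independent Bernoulli draws, and the GLS constraint implemented as a forced switch once the reward has stayed on one side for three consecutive trials. The function names, the fair-coin default, and this particular constraint mechanism are our own assumptions, not the authors' published code.

```python
import random

def frs(n_trials, p_right=0.5, rng=random):
    """Full random schedule (FRS): each trial's rewarded side is an
    independent Bernoulli draw. 0 = left rewarded, 1 = right rewarded."""
    return [int(rng.random() < p_right) for _ in range(n_trials)]

def gls(n_trials, p_right=0.5, max_run=3, rng=random):
    """Gellerman-like schedule (GLS): as FRS, except the rewarded side is
    not allowed to repeat for more than `max_run` consecutive trials."""
    sides, run = [], 0
    for _ in range(n_trials):
        side = int(rng.random() < p_right)
        if sides and side == sides[-1] and run >= max_run:
            side = 1 - side          # run-length cap reached: force a switch
        run = run + 1 if sides and side == sides[-1] else 1
        sides.append(side)
    return sides
```

A run-length histogram over many generated sequences makes the 'input bias' difference concrete: FRS occasionally produces long same-side reward runs by chance, whereas GLS by construction never exceeds three.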

Highlights

  • In typical two-alternative discrimination tasks, subjects are required to choose between two offered options [1,2,3,4]

  • We found that, although the 'Gellerman-like training schedule' (GLS) presented fewer input biases than the 'full random training schedule' (FRS), the virtual 'Rational Decision Maker' (RDM) produced more steady-state output biases with GLS than with FRS under a variety of test conditions

  • Although it is clear that FRS contains an implicit source of input biases, we do not understand exactly how switching from FRS to GLS affects the production of output biases by a simple 'Rational Decision Maker' (RDM)



Introduction

In typical two-alternative discrimination tasks, subjects are required to choose between two offered options [1,2,3,4]. Here, we compared the choice patterns produced by a virtual 'Rational Decision Maker' (RDM) in response to computer-generated 'full random' (FRS) and 'Gellerman-like' (GLS) training sequences. By assuming that choices comply with matching behavior, we hypothesized that the reduced number of input biases in GLS would decrease the production of output biases compared to FRS.
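The RDM's equations are not reproduced in this excerpt. As a hedged stand-in, the sketch below implements one simple rule consistent with the stated matching assumption (the long-term choice ratio equals the long-term reinforcer ratio): choose each side with probability proportional to a leaky average of the rewards recently earned on that side. The integration constant `tau`, the symmetric initial values, and the helper name `simulate_rdm` are illustrative assumptions.

```python
import random

def simulate_rdm(schedule, tau=5.0, rng=random):
    """Local-matching decision maker: the probability of choosing a side is
    proportional to a leaky (exponentially weighted) average of the rewards
    earned on that side. `schedule` lists the rewarded side per trial (0/1),
    e.g. the output of frs() or gls() above. Returns the list of choices."""
    decay = 1.0 - 1.0 / tau      # per-trial leak of the reward averages
    income = [0.5, 0.5]          # assumed symmetric starting estimates
    eps = 1e-6                   # keeps the choice ratio defined early on
    choices = []
    for rewarded_side in schedule:
        p_right = (income[1] + eps) / (income[0] + income[1] + 2 * eps)
        choice = int(rng.random() < p_right)
        reward = 1.0 if choice == rewarded_side else 0.0
        income[choice] = decay * income[choice] + (1.0 - decay) * reward
        income[1 - choice] *= decay   # the unchosen side's average decays
        choices.append(choice)
    return choices
```

Feeding frs() and gls() sequences through simulate_rdm() and comparing the lengths of same-side choice runs is then one way to reproduce, under these assumptions, the comparison of output biases described above.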

