Abstract

Humans are capable of detecting and exploiting a variety of environmental regularities, including stimulus-stimulus contingencies (e.g., visual statistical learning) and stimulus-reward contingencies. However, the relationship between these two types of learning is poorly understood. In two experiments, we sought evidence that the occurrence of rewarding events enhances or impairs visual statistical learning. Across all of our attempts to find such evidence, we employed a training stage during which we grouped shapes into triplets and presented triplets one shape at a time in an undifferentiated stream. Participants subsequently performed a surprise recognition task in which they were tested on their knowledge of the underlying structure of the triplets. Unbeknownst to participants, triplets were also assigned no-, low-, or high-reward status. In Experiments 1A and 1B, participants viewed shape streams while low and high rewards were “randomly” given, presented as low- and high-pitched tones played through headphones. Rewards were always given on the third shape of a triplet (Experiment 1A) or the first shape of a triplet (Experiment 1B), and high- and low-reward sounds were always consistently paired with the same triplets. Experiment 2 was similar to Experiment 1, except that participants were required to learn value associations of a subset of shapes before viewing the shape stream. Across all experiments, we observed significant visual statistical learning effects, but the strength of learning did not differ amongst no-, low-, or high-reward conditions for any of the experiments. Thus, our experiments failed to find any influence of rewards on statistical learning, implying that visual statistical learning may be unaffected by the occurrence of reward. The system that detects basic stimulus-stimulus regularities may operate independently of the system that detects reward contingencies.

Highlights

  • At every moment, human cognition faces the complex task of interpreting and responding to an overwhelming amount of stimulation.

  • To examine the strength of evidence favoring the null hypothesis, we applied a Bayesian repeated-measures ANOVA to these data using the JASP software project (Love et al., 2015), with default priors (Rouder et al., 2012). This analysis compares models that include versus do not include each factor and interaction, producing a Bayes factor (BF) that quantifies the evidence for the null model relative to a model that includes the factor or interaction in question.

  • Similar to our findings from the first experiment, visual statistical learning was generally evident across all conditions, but there was no clear effect of value association upon learning, nor any effect of pre-exposure to some constituent shapes during the reward-learning phase of the experiment. These results suggest that concurrently presented rewards and previously learned stimulus−reward associations have no impact on visual statistical learning.



Introduction

Human cognition faces the complex task of interpreting and responding to an overwhelming amount of stimulation. Repeatedly experiencing one phoneme that reliably predicts another (Saffran et al., 1996), or particular visual items that reliably co-occur in time or space with others (Fiser and Aslin, 2001, 2002), can lead to above-chance recognition of those regularities. This kind of statistical learning is available to us from shortly after birth and throughout adulthood (Saffran et al., 1996; Saffran et al., 1999), and such learning spans perceptual systems (Glicksohn and Cohen, 2013), allowing humans to automatically detect and learn the rich probabilistic relationships common within real-world environments. Although statistical learning can be more complex, simple stimulus−stimulus associative relationships are an important (and the most commonly studied) component of statistical learning, and these relationships can apparently be detected and learned without observers' intentions or awareness (Turk-Browne et al., 2005).

