Introduction

Implicit statistical learning is, by definition, learning that occurs without conscious awareness. However, measures that putatively assess implicit statistical learning often require explicit reflection, for example, deciding if a sequence is ‘grammatical’ or ‘ungrammatical’. By contrast, ‘processing-based’ tasks can measure learning without requiring conscious reflection, by measuring processes that are facilitated by implicit statistical learning. For example, when multiple stimuli consistently co-occur, it is efficient to ‘chunk’ them into a single cognitive unit, thus reducing working memory demands. Previous research has shown that when sequences of phonemes can be chunked into ‘words’, participants are better able to recall these sequences than random ones. Here, in two experiments, we investigated whether serial visual recall could be used to effectively measure the learning of a more complex artificial grammar that is designed to emulate the between-word relationships found in language.

Methods

We adapted the design of a previous Artificial Grammar Learning (AGL) study to use a visual serial recall task, as well as more traditional reflection-based grammaticality judgement and sequence completion tasks. After exposure to “grammatical” sequences of visual symbols generated by the artificial grammar, the participants were presented with novel testing sequences. After a brief pause, participants were asked to recall the sequence by clicking on the visual symbols on the screen in order.

Results

In both experiments, we found no evidence of artificial grammar learning in the visual serial recall task. However, we did replicate previously reported learning effects in the reflection-based measures.

Discussion

In light of the success of serial recall tasks in previous experiments, we discuss several methodological factors that influence the extent to which implicit statistical learning can be measured using these tasks.