Abstract

Complexity has been shown to affect performance on artificial grammar learning (AGL) tasks, in which test items are categorized as grammatical or ungrammatical according to implicitly trained grammar rules. However, previously published AGL experiments did not use consistent measures to investigate the comprehensive effect of grammar complexity on task performance. The present study focused on computerizing Bollt and Jones's (2000) technique of calculating topological entropy (TE), a quantitative measure of AGL charts' complexity, with the aim of examining associations between grammar systems' TE and learners' AGL task performance. We surveyed the literature and identified 56 previous AGL experiments, based on 10 different grammars, that met the sampling criteria. Using the automated matrix-lift-action method, we assigned a TE value to each of these 10 previously used AGL systems and examined its correlation with learners' task performance. The meta-regression analysis showed a significant correlation, demonstrating that the complexity effect transcended the different settings and conditions in which the categorization task was performed. The results reinforce the importance of using this new automated tool to uniformly measure grammar systems' complexity when designing AGL experiments and evaluating their findings.

Highlights

  • Artificial grammar learning (AGL) refers to an experimental approach that explores pattern recognition in a set of structured sequences, typically comprising strings of alphabetical letters

  • The present study focused on computerizing Bollt and Jones’s (2000) technique of calculating topological entropy (TE), a quantitative measure of artificial grammar learning (AGL) charts’ complexity, with the aim of examining associations between grammar systems’ TE and learners’ AGL task performance

  • As explained in detail in Appendix B in Supplementary Material, we extended the code used in Bailey and Pothos’s (2008) StimSelect software to uniformly calculate grammatical complexity for various AGL charts used in many prior research studies, enabling meta-analysis of learner performance in previous investigations of artificial grammar tasks based on different grammars


Introduction

Artificial grammar learning (AGL) refers to an experimental approach that explores pattern recognition in a set of structured sequences, typically comprising strings of alphabetical letters. Such experiments include a training phase and a testing phase (Reber, 1967, 1969). Various theories have been debated to explain what characterizes the learning process and what is acquired during AGL training sessions, including the probabilistic learning approach (Reber, 1967), the exemplar-based learning approach (Brooks and Vokey, 1991), and a third approach suggesting that learners' acquisition of abstract rules during the training phase enables them to judge the sequences at the testing phase (Redington and Chater, 1996; Pothos, 2010). The AGL paradigm has been proposed as a model for language or syntax acquisition, but questions remain as to how broadly it applies to the tasks faced by language learners (Marcus et al., 1995; Peña et al., 2002; Endress and Bonatti, 2007; Aslin and Newport, 2008).
