Abstract

In this paper we probe the interaction between sequential and hierarchical learning by investigating implicit learning in a group of school-aged children. We administered a serial reaction time task in the form of a modified Simon Task, in which the stimuli were organised according to the rules of two distinct artificial grammars, specifically Lindenmayer systems: the Fibonacci grammar (Fib) and the Skip grammar (a modification of the former). The choice of grammars is determined by the goal of this study: to investigate how sensitivity to structure emerges in the course of exposure to an input whose surface transitional properties, by hypothesis, bootstrap structure. Studies conducted to date have mainly been designed to investigate low-level surface regularities, learnable in purely statistical terms, whereas hierarchical learning has not yet been effectively investigated. The possibility of directly pinpointing the interplay between sequential and hierarchical learning is instead at the core of our study: we presented children with two grammars, Fib and Skip, which share the same transitional regularities, thus providing identical opportunities for sequential learning, while crucially differing in their hierarchical structure. In particular, there are specific points in the sequence (k-points) which, despite giving rise to the same transitional regularities in the two grammars, support hierarchical reconstruction in Fib but not in Skip. In our protocol, children were simply asked to perform a traditional Simon Task and were completely unaware of the real purpose of the experiment. Results indicate that sequential learning occurred in both grammars, as shown by the decrease in reaction times throughout the task, while differences emerged in the sensitivity to k-points: these, we contend, play a role in hierarchical reconstruction in Fib, whereas they are devoid of structural significance in Skip. Specifically, children were faster at k-points in sequences produced by Fib, providing an entirely new kind of evidence for the hypothesis that implicit learning involves an early activation of strategies of hierarchical reconstruction, based on a straightforward interplay with the statistically based computation of transitional regularities over sequences of symbols.
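
To make the notion of "surface transitional properties" concrete, the sketch below (in Python) generates a long string with the Fibonacci grammar and tabulates its symbol-to-symbol transitions, i.e. the statistics that purely sequential learning could track. The rewriting rules used (0 → 1, 1 → 01) are the formulation standardly used for this grammar in the AGL literature and are an assumption here, since the text does not spell them out; the Skip grammar is not modelled because its rules are not given in the text.

    # A minimal sketch, assuming the standard Fibonacci grammar rules
    # (0 -> 1, 1 -> 01), which this text itself does not state.
    from collections import Counter

    RULES = {"0": "1", "1": "01"}

    def derive(axiom: str = "0", generations: int = 15) -> str:
        # L-systems rewrite every symbol in parallel at each generation.
        s = axiom
        for _ in range(generations):
            s = "".join(RULES[c] for c in s)
        return s

    word = derive()
    for (a, b), n in sorted(Counter(zip(word, word[1:])).items()):
        print(f"{a} -> {b}: {n}")
    # "0" is always followed by "1" (no "0 -> 0" bigrams ever occur),
    # while "1" can be followed by either symbol: only part of the
    # sequence is predictable from surface statistics alone.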

Highlights

  • Children performed a modified Simon Task in which the stimulus sequences followed two Lindenmayer-system grammars, the Fibonacci grammar (Fib) and the Skip grammar, which share the same transitional regularities but differ in hierarchical structure

  • Reaction times decreased across the task under both grammars, indicating sequential learning, while children were faster at k-points only in Fib, consistent with an early activation of strategies of hierarchical reconstruction

  • Since accuracy rates were at ceiling, especially for congruent trials, and almost constant across blocks, with no significant differences, only the statistical analyses of reaction times (RTs) are reported

Introduction

Artificial grammar learning (AGL) is an experimental paradigm employed to investigate how sequences of symbols produced by a system are learnt, as well as to assess implicit learning, i.e. learning that occurs incidentally, without explicit awareness of what has been learnt. An artificial grammar is characterised by a finite alphabet of symbols and a finite set of rules which apply to these symbols to produce specific strings. We start by providing a short introduction to AGL, discussing the most important findings that have been reported using this methodology, as well as their major weaknesses. We then focus on the technical aspects of Lindenmayer systems, and in particular of the two grammars that we employed in our experimental protocol, the Fibonacci grammar and its non-trivial modification Skip. Finally, we discuss our experimental results, elucidating some of the non-trivial theoretical consequences that they suggest.
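
As an illustration of this definition, the sketch below derives successive generations of the Fibonacci grammar as a Lindenmayer system, in which the rules rewrite every symbol of the current string in parallel. The rules (0 → 1, 1 → 01) are the standard formulation of this grammar and are an assumption here, as they are not stated in this text.

    # A minimal L-system derivation, assuming the standard Fibonacci
    # grammar rules (0 -> 1, 1 -> 01); not stated in the text itself.
    RULES = {"0": "1", "1": "01"}

    def step(s: str) -> str:
        # Unlike ordinary rewriting grammars, L-systems apply the rules
        # to all symbols simultaneously at each generation.
        return "".join(RULES[c] for c in s)

    g = "0"  # axiom
    for n in range(8):
        print(n, g)
        g = step(g)
    # 0 0
    # 1 1
    # 2 01
    # 3 101
    # 4 01101
    # 5 10101101
    # ...string lengths grow as Fibonacci numbers, hence the grammar's name.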
