Abstract

Language learning requires mastering multiple tasks, including segmenting speech to identify words, and learning the syntactic role of these words within sentences. A key question in language acquisition research is the extent to which these tasks are sequential or simultaneous, and consequently whether they may be driven by distinct or similar computations. We explored a classic artificial language learning paradigm in which the language structure is defined in terms of non-adjacent dependencies. We show that participants are able to use the same statistical information at the same time both to segment continuous speech into words and to generalise over its structure, even when the generalisations were over novel speech that the participants had not previously experienced. We suggest that, in the absence of evidence to the contrary, the most economical explanation for these effects is that speech segmentation and grammatical generalisation depend on similar statistical processing mechanisms.

Highlights

  • In order to achieve linguistic proficiency, language learners must identify words from continuous speech, and work out the relations between those words, in terms of determining grammatical categories and syntactic structures

  • Learning must operate by somehow determining the regularities that are evident within the language, and how these regularities relate to meaning in terms of defining the relations between words and their mapping to intended referents in the environment (Cunillera, Laine, Camara, & Rodriguez-Fornells, 2010; Monaghan & Mattock, 2012)

  • One perspective is that similar statistical mechanisms may apply to speech segmentation and to grammatical processing (Perruchet, Tyler, Galland, & Peereman, 2004; Romberg & Saffran, 2010)

Introduction

In order to achieve linguistic proficiency, language learners must identify words from continuous speech, and work out the relations between those words, in terms of determining grammatical categories and syntactic structures. Peña et al. (2002) suggested that, while adults are capable of using statistics to identify words from a continuous speech stream, they may apply separate computations, which do not depend on learning statistical dependencies between particular elements of the language, to generalise the structure to consistent forms. They suggested that this generalisation can occur only once the task of identifying the words in the stimuli has been solved (Chomsky, 1957; Endress & Bonatti, 2007; Marchetto & Bonatti, in press; Marcus et al., 1999; Miller & Chomsky, 1963). If segmentation and structural generalisation are separable processes, we would expect to see a null effect for the novel syllable generalisation task, with performance similar to that seen in Peña et al.'s (2002) original study.
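Statistical segmentation of the kind discussed here is standardly illustrated with transitional probabilities (TPs) between adjacent syllables: TPs tend to be high within words and to dip at word boundaries, so boundaries can be posited wherever the TP falls. The following is a minimal sketch of that idea only; the toy syllable stream, the three invented "words", and the 0.8 boundary threshold are illustrative assumptions, not the materials or analysis used in this study or in Peña et al. (2002).

```python
# Illustrative sketch of TP-based speech segmentation (not the study's actual
# stimuli or method): compute P(next syllable | current syllable) over a
# continuous stream, then posit word boundaries where the TP dips.

from collections import Counter

def transitional_probabilities(syllables):
    """Return P(b | a) for every adjacent syllable pair (a, b) in the stream."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

def segment(syllables, tps, threshold=0.8):
    """Insert a word boundary wherever the TP between syllables falls below threshold."""
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        if tps[(a, b)] < threshold:        # low TP -> likely word boundary
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# Toy continuous stream: three invented trisyllabic "words"
# (tu-pi-ro, go-la-bu, bi-da-ku) concatenated with no pauses.
stream = ["tu", "pi", "ro", "go", "la", "bu", "bi", "da", "ku",
          "tu", "pi", "ro", "bi", "da", "ku", "go", "la", "bu",
          "tu", "pi", "ro", "bi", "da", "ku"]

tps = transitional_probabilities(stream)
print(segment(stream, tps))  # recovers tupiro / golabu / bidaku
```

Within-word TPs in this toy stream are 1.0, while TPs spanning word boundaries are at most 2/3, so any threshold between those values recovers the words; the contrast between such element-specific statistics and generalisation to novel elements is exactly what is at issue in the introduction above.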

