Abstract

Because of the hierarchical organization of natural languages, words that are syntactically related are not always linearly adjacent. For example, the subject and verb in "the child always runs" agree in person and number, although they are not adjacent in the sequence of words. Because such dependencies are indicative of abstract linguistic structure, how language learners acquire these relationships is of significant theoretical interest. Most experiments that investigate nonadjacent dependency (NAD) learning have used artificial languages in which the to-be-learned dependencies are isolated by presenting the minimal sequences that contain the dependent elements. However, dependencies in natural language are not typically isolated in this way. We report, to our knowledge, the first demonstration of successful learning of embedded NADs, in which silences do not mark dependency boundaries. Subjects heard passages of English with a predictable structure, interspersed with passages of the artificial language. The English sentences were designed to induce boundaries in the artificial language. In Experiments 1 and 3 the artificial NADs were contained within the induced boundaries and subjects learned them, whereas in Experiments 2 and 4 the NADs crossed the induced boundaries and subjects did not learn them. We take this as evidence that sentential structure was "carried over" from the English sentences and used to organize the artificial language. This approach provides several new insights into the basic mechanisms of NAD learning in particular and statistical learning in general.