Abstract

Regularization occurs when the output a learner produces is less variable than the linguistic data they observed. In an artificial language learning experiment, we show that there exist at least two independent sources of regularization bias in cognition: a domain-general source based on cognitive load and a domain-specific source triggered by linguistic stimuli. Both of these factors modulate how frequency information is encoded and produced, but only the production-side modulations result in regularization (i.e. cause learners to eliminate variation from the observed input). We formalize the definition of regularization as the reduction of entropy and find that entropy measures are better at identifying regularization behavior than frequency-based analyses. Using our experimental data and a model of cultural transmission, we generate predictions for the amount of regularity that would develop in each experimental condition if the artificial language were transmitted over several generations of learners. Here we find that the effect of cognitive constraints can become more complex when put into the context of cultural evolution: although learning biases certainly carry information about the course of language evolution, we should not expect a one-to-one correspondence between the micro-level processes that regularize linguistic datasets and the macro-level evolution of linguistic regularity.
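The abstract's entropy-based definition of regularization can be made concrete with a small illustration. The sketch below is not the authors' code; it simply computes Shannon entropy for an observed frequency distribution and for a learner's productions, so that a positive entropy reduction indicates regularization. The variant labels and counts are purely hypothetical.

```python
from collections import Counter
from math import log2

def entropy(counts):
    """Shannon entropy (in bits) of a frequency distribution."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    return -sum(p * log2(p) for p in probs)

# Hypothetical training input: two linguistic variants at a 60/40 ratio.
observed = Counter({"variant_a": 6, "variant_b": 4})

# Hypothetical learner output: the same variants produced at a 90/10 ratio.
produced = Counter({"variant_a": 9, "variant_b": 1})

h_in = entropy(observed.values())
h_out = entropy(produced.values())

# A positive difference means the output is less variable than the input
# (regularization); a negative difference means variation increased.
print(f"input entropy:  {h_in:.3f} bits")   # ~0.971
print(f"output entropy: {h_out:.3f} bits")  # ~0.469
print(f"entropy reduction: {h_in - h_out:.3f} bits")
```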

Highlights

  • Languages evolve as they pass from one mind to another

  • Recent experimental research has found domain-general learning mechanisms underpin many aspects of language learning (Saffran & Thiessen, 2007), such as the statistical learning involved in word segmentation by infants (Saffran, Aslin, & Newport, 1996) and how memory constraints modulate learners’ productions of probabilistic variation in language (Hudson Kam & Chang, 2009)

  • This paper offers a first attempt to quantify the relative contribution of domain-general and domain-specific learning mechanisms to linguistic regularization behavior

Summary

Introduction

Immersed in a world of infinite variation, our cognitive architecture constrains what we can perceive, process, and produce. Cognitive constraints, such as learning biases, shape languages as they evolve and can help to explain the structure of language (Bever, 1970; Slobin, 1973; Newport, 1988; Newport, 2016; Christiansen & Chater, 2008; Christiansen & Chater, 2016; Culbertson, Smolensky, & Legendre, 2012; Kirby, Griffiths, & Smith, 2014). It is likely that a mixture of domain-general and domain-specific mechanisms is involved in language learning.

Methods
Results
Discussion
Conclusion