A central goal of research into language acquisition is explaining how, when learners generalize to new cases, they appropriately restrict their generalizations (e.g., to avoid producing ungrammatical utterances such as *the clown laughed the man; "*" indicates an ungrammatical form). The past 30 years have seen an unresolved debate between statistical preemption and entrenchment as explanations. Under preemption, the use of a verb in a particular construction (e.g., *the clown laughed the man) is probabilistically blocked by hearing that verb only in other constructions with similar meanings (e.g., the clown made the man laugh). Under entrenchment, such errors (e.g., *the clown laughed the man) are probabilistically blocked by hearing any utterance that includes the relevant verb (e.g., the clown made the man laugh; the man laughed). Across five artificial-language-learning studies, we designed a training regime such that learners received evidence for the ungrammaticality (under the relevant hypothesis) of a particular unattested verb/noun + particle combination (e.g., *chila + kem; *squeako + kem) via either preemption only or entrenchment only. Across all five studies, participants in the preemption condition (as per our preregistered prediction) rated unattested verb/noun + particle combinations as less acceptable for restricted verbs/nouns, which appeared during training, than for unrestricted, novel-at-test verbs/nouns, which did not appear during training, providing strong evidence for preemption. Participants in the entrenchment condition showed no evidence for such an effect (and, in 3/5 experiments, positive evidence for the null). We conclude that a successful model of learning linguistic restrictions must instantiate competition between different forms only where they express the same (or similar) meanings.