Abstract
How language users become able to produce forms they have never encountered in the input is central to our understanding of language cognition. A range of models, including rule-based, analogy-based, and stochastic models, has been proposed to account for this ability. Although all three classes of model are reasonably successful, we argue that productivity is more accurately captured through learnability than by rules or probabilities. Using a combination of computational modelling and behavioural experimentation, we show that the basic principle of error-driven learning allows language users to extract the relevant patterns. These patterns are found at a level that cuts across phonology and morphology and is not considered by mainstream approaches to language. Our findings thus highlight how a learning-based approach constrains our inferences about the types of structures that should be targeted in a cognitively realistic account of language representation.
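The error-driven learning the abstract refers to is typically formalised with a delta-rule (Rescorla–Wagner-style) update, in which cue–outcome associations are strengthened or weakened in proportion to prediction error. The sketch below illustrates that general principle only; the cue and outcome labels, the learning rate, and the toy data are illustrative assumptions, not the authors' actual materials or parameters.

```python
# A minimal sketch of error-driven (delta-rule) learning over
# cue-outcome events. All names and values here are illustrative
# assumptions, not the study's actual stimuli or settings.
from collections import defaultdict

def train(events, rate=0.1, n_passes=50):
    """Learn cue-outcome association weights by error-driven updates.

    events: list of (cues, outcomes) pairs, each a set of strings.
    """
    weights = defaultdict(float)  # (cue, outcome) -> association strength
    outcomes_seen = {o for _, outs in events for o in outs}
    for _ in range(n_passes):
        for cues, outs in events:
            for outcome in outcomes_seen:
                # Current prediction: summed weights of all cues present.
                prediction = sum(weights[(c, outcome)] for c in cues)
                target = 1.0 if outcome in outs else 0.0
                error = target - prediction  # the driving signal
                for c in cues:
                    weights[(c, outcome)] += rate * error
    return weights

# Toy events: each word cue co-occurs with its own past form, while a
# shared sub-lexical cue "-alk" occurs in both learning events.
events = [
    ({"walk", "-alk"}, {"walked"}),
    ({"talk", "-alk"}, {"talked"}),
]
w = train(events)
```

Because cues compete to predict outcomes, the discriminative cue "walk" ends up more strongly associated with "walked" than the shared cue "-alk" is, which is the sense in which error-driven learning can pick out patterns that cut across conventional phonological and morphological units.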