Abstract

Research on cross-linguistic differences in morphological paradigms reveals a wide range of variation on many dimensions, including the number of categories expressed, the number of unique forms, and the number of inflectional classes. However, in an influential paper, Ackerman & Malouf (2013) argue that there is one dimension on which languages do not differ widely: predictive structure. Predictive structure, the extent to which forms in a paradigm predict one another, is quantified as i-complexity. Ackerman & Malouf (2013) show that although languages differ widely in a measure of surface paradigm complexity, called e-complexity, they tend to have low i-complexity. They conclude that morphological paradigms have evolved under a pressure for low i-complexity, such that even paradigms with very high e-complexity are relatively easy to learn so long as they have low i-complexity. While this would potentially explain why languages are able to maintain large paradigms, recent work by Johnson et al. (submitted) suggests that both neural networks and human learners may actually be more sensitive to e-complexity than i-complexity. Here we build on this work, reporting a series of experiments under more realistic learning conditions which confirm that, across a range of paradigms that vary in either e- or i-complexity, neural networks (LSTMs) are sensitive to both, but show a larger effect of e-complexity (and other measures associated with the size and diversity of forms). In human learners, we fail to find any effect of i-complexity at all. Further, analysis of a large number of randomly generated paradigms shows that e- and i-complexity are negatively correlated: paradigms with high e-complexity necessarily show low i-complexity. These findings suggest that the observations made by Ackerman & Malouf (2013) for natural language paradigms may stem from the nature of these measures rather than learning pressures specially attuned to i-complexity.

Highlights

  • Languages differ widely in their morphological systems, including substantial variation in their inflectional paradigms; some languages do not use morphology to mark grammatical information at all (e.g. Mandarin) whereas others make use of inflectional morphology to mark dozens of grammatical functions (e.g. Arabic)

  • Results from simulations with Long Short Term Memory (LSTM) neural networks and behavioural experiments with human learners both suggest that e-complexity has a robust effect on learning of inflectional paradigms

  • We test the learnability of a set of 1000 randomly generated paradigms with LSTM neural networks to show how these two measures relate to learning across a wider range of paradigms than we covered in Experiments 1 and 2

Introduction

Languages differ widely in their morphological systems, including substantial variation in their inflectional paradigms; some languages do not use morphology to mark grammatical information at all (e.g. Mandarin) whereas others make use of inflectional morphology to mark dozens of grammatical functions (e.g. Arabic). In addition to the number of inflectional categories, the size of a morphological system is impacted by the number of inflection classes, i.e. different realizations for the same morphosyntactic or morphosemantic distinction across groups of lexemes (Aronoff 1994; Corbett 2009), which has been claimed to be a source of complexity in morphological systems (e.g. Baerman et al. 2010; Ackerman & Malouf 2013). These aspects of morphological complexity pertain to the size of a morphological system. […] The nominative singular -o is predictive of all the other case forms (i.e. if you know that a given noun takes -o in the nominative singular, you can predict its inflection in any other combination of case and number); in contrast, the nominative plural -i is less predictive, since nouns which take that inflection show variation in inflectional marking elsewhere.
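To make the notion of predictive structure concrete, the following is a minimal sketch of i-complexity computed as average conditional entropy between paradigm cells, in the spirit of Ackerman & Malouf (2013). The three-class toy paradigm (class labels, cell names, and suffixes) is invented for illustration, not drawn from any real language: here the nominative singular exponent fully identifies the class, while the nominative plural -i is shared by two classes and so is less predictive, mirroring the asymmetry described above.

```python
import math
from itertools import permutations

# Invented toy paradigm: each inflection class maps a cell to its exponent.
classes = {
    "class1": {"nom.sg": "-o", "nom.pl": "-i", "gen.sg": "-is"},
    "class2": {"nom.sg": "-a", "nom.pl": "-i", "gen.sg": "-ae"},
    "class3": {"nom.sg": "-u", "nom.pl": "-a", "gen.sg": "-us"},
}

def cond_entropy(cell_known, cell_target, classes):
    """H(target | known): remaining uncertainty about the exponent in
    cell_target once the exponent in cell_known is observed, assuming
    inflection classes are equiprobable."""
    n = len(classes)
    joint = {}       # counts of (known exponent, target exponent) pairs
    known_marg = {}  # counts of known exponents
    for forms in classes.values():
        pair = (forms[cell_known], forms[cell_target])
        joint[pair] = joint.get(pair, 0) + 1
        known_marg[pair[0]] = known_marg.get(pair[0], 0) + 1
    h = 0.0
    for (known, _target), count in joint.items():
        p_joint = count / n
        p_cond = count / known_marg[known]
        h -= p_joint * math.log2(p_cond)
    return h

def i_complexity(classes):
    """Average conditional entropy over all ordered pairs of cells."""
    cells = list(next(iter(classes.values())))
    pairs = list(permutations(cells, 2))
    return sum(cond_entropy(a, b, classes) for a, b in pairs) / len(pairs)

# nom.sg fully predicts gen.sg (entropy 0), but nom.pl does not,
# because -i is ambiguous between class1 and class2.
print(cond_entropy("nom.sg", "gen.sg", classes))  # 0.0
print(cond_entropy("nom.pl", "gen.sg", classes))  # ~0.667
print(i_complexity(classes))
```

E-complexity, by contrast, is a function of surface inventory size (e.g. the number of distinct exponents and inflection classes), so in this sketch it would grow with the size of the `classes` table regardless of how predictable the cells are from one another.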
