Abstract

Morphophonological alternations often involve dependencies between nonadjacent segments. Despite the apparent distance between the relevant segments in alternations such as consonant and vowel harmony, these dependencies can usually be viewed as adjacent on a tier representation. However, the tier needed to render dependencies adjacent varies crosslinguistically, and the abstract nature of tier representations, in comparison to flat, string-like representations, has led phonologists to seek justification for their use in phonological theory. In this paper, I propose a learning-based account of tier-like representations. I argue that humans show a proclivity for tracking dependencies between adjacent items, and I propose a simple learning algorithm that incorporates this proclivity by tracking only adjacent dependencies. The model changes its representations when it cannot predict the surface form of alternating segments, a decision governed by the Tolerance Principle, which allows learning to proceed despite the sparsity and exceptions inevitable in naturalistic data. Tier-like representations emerge naturally from this learning procedure, and, when trained on small amounts of natural language data, the model achieves high accuracy in generalizing to held-out test words while flexibly handling crosslinguistic complexities such as neutral segments and blockers. The model also makes precise predictions about human generalization behavior, and these are consistently borne out in artificial language experiments.
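
The Tolerance Principle invoked above is Yang's (2016) productivity threshold: a rule defined over N relevant items is productive only if its exceptions number at most N / ln N. The sketch below illustrates that threshold check; it is a minimal illustration of the principle itself, not the paper's learning algorithm, and the function name and example counts are hypothetical.

```python
import math

def tolerates(n_items: int, n_exceptions: int) -> bool:
    """Tolerance Principle (Yang 2016): a rule covering n_items lexical items
    is productive iff its exceptions do not exceed n_items / ln(n_items)."""
    if n_items < 2:
        # ln(1) = 0, so the threshold is undefined for degenerate cases;
        # treat a rule over fewer than two items as productive only if exceptionless.
        return n_exceptions == 0
    threshold = n_items / math.log(n_items)
    return n_exceptions <= threshold

# Example: a harmony pattern attested in 100 forms.
# Threshold = 100 / ln(100) ≈ 21.7, so 15 exceptions are tolerated but 30 are not.
print(tolerates(100, 15))  # True
print(tolerates(100, 30))  # False
```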
