Abstract
Indexed constraints (like cophonologies) increase a grammar's fit to seen data, but do they hurt the grammar's ability to generalize to unseen data? We focus on French schwa deletion, an optional process whose rate of application is modulated by both phonological and lexical factors, and we propose three indexed constraint learners in the Maximum Entropy (MaxEnt) framework. Using data from Racine (2008), we test the ability of four learners to capture existing patterns and generalize to unseen data: the three indexation learners and a control MaxEnt learner without indexed constraint induction. The indexed constraint learners indeed achieve a better fit to the training data than the control. The resulting grammars are then tested on a different schwa deletion dataset from Smith & Pater (2020). We show that indexed constraints do not lead to a drop in generalization to these data, and that one of the indexation learners produces a grammar that predicts Smith & Pater's data quite closely. We conclude that indexed constraints do not necessarily hurt a grammar's ability to generalize to unseen data, while allowing the grammar to achieve a closer fit to training data.
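As a minimal illustration of the mechanism at issue (not the authors' implementation), the sketch below shows how a lexically indexed constraint works in a MaxEnt grammar: each candidate's probability is proportional to the exponential of its harmony, the negative weighted sum of its constraint violations, and an indexed clone of a general constraint penalizes only words in its lexical class. All constraint names, classes, and weights here are hypothetical.

```python
# Illustrative sketch of indexed constraints in a MaxEnt grammar.
# Constraint names (*Schwa, Max-V, *Schwa_L) and weights are hypothetical.
import math

def maxent_probs(candidates, weights):
    """Map each candidate to its MaxEnt probability.

    candidates: dict of candidate -> {constraint: violation count}
    weights:    dict of constraint -> non-negative weight
    """
    # Harmony = negative weighted sum of violations; P(cand) is
    # proportional to exp(harmony), normalized over all candidates.
    harmonies = {
        cand: -sum(weights.get(c, 0.0) * v for c, v in viols.items())
        for cand, viols in candidates.items()
    }
    z = sum(math.exp(h) for h in harmonies.values())
    return {cand: math.exp(h) / z for cand, h in harmonies.items()}

# A word OUTSIDE the indexed class: only general constraints apply.
general = {
    "retain-schwa": {"*Schwa": 1},   # markedness violation: schwa surfaces
    "delete-schwa": {"Max-V": 1},    # faithfulness violation: vowel deleted
}

# A word INSIDE the hypothetical indexed class L: the clone *Schwa_L adds
# extra pressure to delete, raising the deletion rate for just these items.
indexed = {
    "retain-schwa": {"*Schwa": 1, "*Schwa_L": 1},
    "delete-schwa": {"Max-V": 1},
}

weights = {"*Schwa": 1.0, "Max-V": 1.0, "*Schwa_L": 2.0}
print(maxent_probs(general, weights))  # ~50% deletion
print(maxent_probs(indexed, weights))  # deletion at ~88% for class-L words
```

With equal general weights, deletion and retention are equiprobable for ordinary words, while the indexed clone shifts class-L words sharply toward deletion; this is how indexation lets the grammar match item-specific deletion rates without changing its predictions for the rest of the lexicon.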