Abstract

State-of-the-art learning mechanisms for stress in Optimality Theory (see, e.g., Tesar and Smolensky 2000; Boersma and Pater 2016; Jarosz 2013) make use of probabilistic mechanisms that are domain-general in that they do not refer to the content of constraints and need not be stipulated in UG. By contrast, Pearl (2007, 2011) has argued that domain-general probabilistic learners of parametric grammars (Yang 2002) are insufficient for word stress, and that, instead, domain-specific learning mechanisms must be stipulated in UG alongside the parameters themselves. We propose a modification of Yang’s (2002) learner based on Jarosz’s (2015) learner for Optimality Theory, the Expectation Driven Parameter Learner, and show that this modification yields a dramatic improvement in accuracy (from 4.3% to 96%) on a representative typology generated by Dresher and Kaye’s (1990) parameter set. This suggests that domain-general learning mechanisms may be sufficient for learning stress after all, contra Pearl (2007, 2011), regardless of which grammatical representation (parameters or violable constraints) better reflects the human language capacity.
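To make the general idea concrete, the following is a minimal, hypothetical sketch of expectation-driven learning applied to binary stress parameters. It is not the authors' implementation: the update rule, the sampling scheme, and the `stress_correct` evaluation function are all illustrative assumptions. The core idea it illustrates is that each parameter's probability is nudged toward whichever setting yields more successes on the current datum, with the remaining parameters sampled from the learner's current probabilities.

```python
import random

def edpl_step(probs, datum, stress_correct, samples=8, rate=0.1):
    """One update of a sketched Expectation Driven Parameter Learner.

    probs:          list of P(parameter_i = 1) for binary stress parameters.
    stress_correct: function (grammar, datum) -> bool; whether the sampled
                    grammar stresses the datum correctly. A real learner
                    would compute stress from metrical parameters in the
                    style of Dresher and Kaye (1990); here it is a stub.
    """
    for i in range(len(probs)):
        wins = [0, 0]
        for setting in (0, 1):
            for _ in range(samples):
                # Pin parameter i to the candidate setting; sample the
                # other parameters from the current probabilities.
                g = [setting if j == i else int(random.random() < probs[j])
                     for j in range(len(probs))]
                if stress_correct(g, datum):
                    wins[setting] += 1
        # Nudge P(parameter_i = 1) toward the more successful setting.
        if wins[1] > wins[0]:
            probs[i] += rate * (1 - probs[i])
        elif wins[0] > wins[1]:
            probs[i] -= rate * probs[i]
    return probs
```

With a toy evaluation function that rewards only one setting of the first parameter, repeated calls drive that parameter's probability toward the correct value while leaving genuinely uninformative parameters near their priors.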
