Abstract

In this paper we attempt to model variation in Taiwan Southern Min syllable contraction using the Gradual Learning Algorithm (GLA; Boersma and Hayes 2001), an Optimality-Theoretic learning model with variable constraint ranking. To explore the effectiveness of the GLA, we examine three data sets of increasing complexity: non-variable, fully contracted forms as analyzed by Hsu (2003); variable outputs as noted by Hsu and confirmed by other native speakers; and phonetically variable outputs collected in a speech production experiment by Li (2005). The results reveal that the GLA is capable of providing plausible constraint ranking hierarchies that capture both the major generalizations and the variability. Stochastic constraint evaluation thus seems to be a promising mechanism in the construction of grammars.
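The core mechanism the abstract refers to can be illustrated with a minimal sketch of stochastic constraint evaluation in the spirit of Boersma and Hayes (2001): each constraint carries a ranking value on a continuous scale, Gaussian evaluation noise is added at each evaluation, and the noisy values determine a total order. When two ranking values lie close together, their relative order varies across evaluations, which is how variable outputs arise. The constraint names and ranking values below are hypothetical illustrations, not the constraints or values proposed in the paper.

```python
import random

def evaluate_ranking(ranking_values, noise_sd=2.0, rng=random):
    """Return constraint names sorted by noisy ranking value, highest first.

    ranking_values: dict mapping constraint name -> ranking value.
    noise_sd: standard deviation of the Gaussian evaluation noise
    (Boersma and Hayes conventionally use 2.0).
    """
    noisy = {c: v + rng.gauss(0.0, noise_sd) for c, v in ranking_values.items()}
    return sorted(noisy, key=noisy.get, reverse=True)

# Hypothetical constraints: ranking values only 2 units apart, so both
# orderings occur with substantial probability across many evaluations,
# modeling variable (e.g. contracted vs. uncontracted) outputs.
values = {"FAITH": 100.0, "*STRUC": 98.0}
orders = {tuple(evaluate_ranking(values)) for _ in range(1000)}
```

With a 2-unit gap and noise SD of 2.0, the lower-ranked constraint outranks the higher one on roughly a quarter of evaluations, so over 1000 evaluations both orderings are virtually certain to appear; widening the gap drives the grammar toward categorical behavior.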
