Abstract
Current estimates of the acoustic information relevant to speech are typically inventory-based: they assume that a language has a set of phonemes (or featural/gestural decompositions of phonemes) and then identify and rank or weight the acoustic parameters sufficient to distinguish a phonemic contrast, using data from instantiations of the sounds or categories comprising that contrast in either a controlled set of real words in the language or in nonword syllables. Acknowledging that such contrasts ultimately derive from lexical oppositions, we present an alternative, complementary approach to the quantification of acoustic information in speech: one that is lexically based. Using acoustic data from the Massive Auditory Lexical Decision project (Tucker et al., 2018; over 26,000 unique words produced by a single speaker of Western Canadian English) and listener identification data on a representative 240-word sample of that lexicon, we define a weighted network of the phonological lexicon (cf. Vitevitch, 2008), in which the weight on a link between a minimal pair is the acoustic similarity predicted by a model fit to the listener error distributions. From this network, the distributed, “global” information contributed by individual parameters operating in an ensemble of lexical oppositions can be estimated from changes in network entropy under perturbations of those parameters.
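The perturbation logic described above can be sketched schematically. The following is a minimal illustration, not the paper's actual model: the words, similarity weights, entropy definition, and choice of perturbed edges are all hypothetical, standing in for the fitted similarity values and the MALD lexicon described in the abstract.

```python
import math

# Toy phonological network: edges link minimal pairs; weights are
# hypothetical acoustic-similarity values (the paper's model-fitted
# values are not reproduced here).
edges = {
    ("bat", "pat"): 0.8,   # initial voicing contrast
    ("bat", "bad"): 0.6,   # final voicing contrast
    ("pat", "pad"): 0.5,
}

def network_entropy(edge_weights):
    """Shannon entropy of the normalized edge-weight distribution."""
    total = sum(edge_weights.values())
    return -sum((w / total) * math.log2(w / total)
                for w in edge_weights.values())

def perturb(edge_weights, affected, factor):
    """Scale the weights on edges tied to one acoustic parameter."""
    return {e: (w * factor if e in affected else w)
            for e, w in edge_weights.items()}

# "Global" information of a parameter is estimated as the change in
# network entropy when that parameter's edges are perturbed.
baseline = network_entropy(edges)
shifted = network_entropy(perturb(edges, {("bat", "pat")}, 0.5))
delta_H = shifted - baseline
```

In this sketch, a parameter whose perturbation produces a large `delta_H` carries more distributed information across the ensemble of lexical oppositions than one whose perturbation leaves the entropy nearly unchanged.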