Abstract

The cross-linguistic tendency for contrast shifts to occur between some cues more than others has been investigated typologically and experimentally (Yang 2019), but it has received less attention in computational modeling. This paper adapts a human experimental paradigm (Kingston et al. 2008) to the speech perception component of a neural network model of sound change (Beguš 2020) to better understand how the model processes acoustic cues, in the context of Yang's proposal that auditory dimensions affect which cues are more likely to undergo contrast shift. Piloting this neural network probing technique, I find evidence that the model integrates different pairs of English stop voicing cues than humans do, suggesting that further amendments to the model are necessary to implement Yang's (2019) account. More broadly, these results highlight potential differences in acoustic processing between humans and the model under investigation, a convolutional neural network (CNN), an architecture commonly used in spoken language applications.
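
To make the probing setup concrete, below is a minimal, hypothetical sketch of the cue-integration logic that paradigms like Kingston et al. (2008) rely on: stimuli crossing two voicing cues in a factorial design are passed to the model's perception component, and integration is diagnosed from the interaction between the cues' effects on the voicing response. The model stub (`p_voiced`), the stimulus generator, and the cue labels (VOT, onset f0) are illustrative assumptions, not the paper's actual stimuli or implementation.

```python
import itertools
import numpy as np

# Hypothetical stand-in for the perception component of a sound-change model
# such as Beguš (2020): in practice this would be the trained CNN's posterior
# probability that a stimulus contains a voiced stop. Assumed, not the paper's code.
def p_voiced(stimulus: np.ndarray) -> float:
    return float(1.0 / (1.0 + np.exp(-stimulus.mean())))

# Hypothetical stimulus generator: crosses two voicing cues (e.g., VOT and
# onset f0), each set to a "voiced-like" (-1) or "voiceless-like" (+1) value.
# Real stimuli would be manipulated waveforms or spectrograms.
def make_stimulus(vot: float, f0: float, n: int = 100) -> np.ndarray:
    rng = np.random.default_rng(0)
    return rng.normal(0.0, 0.1, n) + 0.5 * vot + 0.5 * f0

# Probe cue integration with a 2x2 factorial design: if the two cues are
# perceptually integrated, the effect of one cue on the voicing response
# should depend on the setting of the other (a non-zero interaction).
levels = [-1.0, 1.0]
responses = {
    (vot, f0): p_voiced(make_stimulus(vot, f0))
    for vot, f0 in itertools.product(levels, levels)
}

vot_effect_low_f0 = responses[(1.0, -1.0)] - responses[(-1.0, -1.0)]
vot_effect_high_f0 = responses[(1.0, 1.0)] - responses[(-1.0, 1.0)]
interaction = vot_effect_high_f0 - vot_effect_low_f0

print(f"VOT effect (low f0):  {vot_effect_low_f0:+.3f}")
print(f"VOT effect (high f0): {vot_effect_high_f0:+.3f}")
print(f"Cue interaction:      {interaction:+.3f}")
```

Comparing the interaction measured from the model against the pattern reported for human listeners is one way such a probe could reveal that the model integrates different cue pairs than humans do.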
