Abstract

Theories of speech production aim to explain how talkers express abstract linguistic forms as audible events that are intelligible to both speaker and listener. The relationship among planned units of speech, their articulatory implementation, and their acoustic consequences is thus a key issue in speech research. The work reported here is part of a larger project designed to investigate the effects of visual-acoustic and visual-articulatory feedback on second language (L2) learners’ production and perception of non-native speech sounds. L2 talkers from a variety of language backgrounds practiced producing an English vowel, /æ/, while receiving visual feedback on either (1) first and second formant frequencies, provided by a real-time spectrographic display, or (2) tongue-back position, shown using a talker-driven tongue avatar. Kinematic data were recorded using an electromagnetic articulograph (EMA) system that tracked tongue midline and lateral movement during vowel productions. Pronunciation accuracy was analyzed by calculating acoustic and kinematic Mahalanobis distances between L2 productions and target (native-talker) exemplars. Initial analyses of a single subject’s data showed that both types of visual feedback training improved pronunciation, suggesting that both acoustic and articulatory information are recruited during vowel production.
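The acoustic distance measure mentioned above can be sketched as follows. This is a minimal illustration, not the study's analysis pipeline: the formant values are invented placeholders, and the abstract does not specify which features entered the kinematic computation. The Mahalanobis distance scales each L2 token's deviation from the native-exemplar centroid by the exemplars' covariance, so dimensions with more natural variability count for less.

```python
import numpy as np

# Illustrative native-talker /ae/ exemplars as (F1, F2) in Hz.
# These numbers are hypothetical, not data from the study.
native = np.array([
    [700.0, 1700.0],
    [680.0, 1750.0],
    [720.0, 1680.0],
    [690.0, 1720.0],
    [710.0, 1690.0],
])

# One hypothetical L2 production of the same vowel.
l2_token = np.array([600.0, 1900.0])

mu = native.mean(axis=0)            # centroid of native exemplars
cov = np.cov(native, rowvar=False)  # covariance across exemplars
cov_inv = np.linalg.inv(cov)

# Mahalanobis distance: sqrt((x - mu)^T S^{-1} (x - mu))
diff = l2_token - mu
d = float(np.sqrt(diff @ cov_inv @ diff))
print(f"Mahalanobis distance: {d:.3f}")
```

A smaller distance indicates a production closer to the native distribution; tracking this value across training sessions is one way such feedback studies can quantify pronunciation improvement.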
