Abstract

Previous research on the visualization of speech segments for pronunciation training suggests that such training results in improved segmental production (e.g., Kartushina et al., 2015; Olson, 2014; Patten & Edmonds, 2015). However, investigations of real-time formant visualization for L2 vowel production training have been limited to training either a single vowel or a pair of vowels (Carey, 2004; Sakai, 2016) and to examining improvement on trained items only (Kartushina et al., 2015). This project investigates the effects of real-time formant visualization on production training for eight L2 vowels in trained and untrained environments as well as in spontaneous speech. L2 learners (n = 11) completed nine 30-minute training sessions in which they used a formant visualization system to practice their vowel production, while a control group (n = 8) completed audio-only vowel production training. A pre-test, post-test, and delayed post-test design was used, and pronunciation improvement was analyzed acoustically using Mahalanobis distance and mixed-effects modeling. Real-time visual acoustic feedback produced greater retained improvement in vowel quality on both trained and untrained items than audio-only training; spontaneous speech did not improve. The findings suggest that this system could serve as an effective pedagogical tool for L2 learners.
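As a rough illustration of the acoustic measure mentioned above (not the authors' actual analysis pipeline), the sketch below scores a hypothetical learner (F1, F2) token against a reference distribution of native-speaker productions of the same vowel using Mahalanobis distance; the formant values and the vowel label are invented for the example.

```python
# Illustrative sketch only: Mahalanobis distance between one learner vowel
# token and a native-speaker reference distribution. Values are hypothetical.
import numpy as np

def mahalanobis_distance(token, reference_tokens):
    """Distance of one (F1, F2) token from a reference formant distribution."""
    reference_tokens = np.asarray(reference_tokens, dtype=float)
    mean = reference_tokens.mean(axis=0)                # mean F1/F2 of the reference vowel
    cov = np.cov(reference_tokens, rowvar=False)        # 2x2 formant covariance matrix
    diff = np.asarray(token, dtype=float) - mean
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

# Hypothetical native-speaker productions of /i/, as (F1, F2) pairs in Hz.
native_i = [(280, 2250), (300, 2300), (310, 2200), (290, 2350), (305, 2280)]

# A learner token: a smaller distance means the token is closer to the
# native-speaker vowel category in formant space.
learner_token = (360, 2100)
print(mahalanobis_distance(learner_token, native_i))
```

Improvement in vowel quality can then be operationalized as a decrease in this distance from pre-test to post-test, which is the kind of outcome a mixed-effects model can take as its dependent variable.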
