Abstract

Pronunciation training studies have yielded important information concerning the processing of audiovisual (AV) information. Second language (L2) learners show increased reliance on bottom-up, multimodal input for speech perception (compared to monolingual individuals). However, little is known about the role of viewing one's own speech articulation processes during speech training. The current study investigated whether real-time, visual feedback for tongue movement can improve a speaker's learning of non-native speech sounds. An interactive 3D tongue visualization system based on electromagnetic articulography (EMA) was used in a speech training experiment. Native speakers of American English produced a novel speech sound (/ɖ/; a voiced, coronal, palatal stop) before, during, and after trials in which they viewed their own speech movements using the 3D model. Talkers' productions were evaluated using kinematic (tongue-tip spatial positioning) and acoustic (burst spectra) measures. The results indicated a rapid gain in accuracy associated with visual feedback training. The findings are discussed with respect to neural models for multimodal speech processing.
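As a rough illustration of the acoustic side of this evaluation, the sketch below computes a short-window magnitude spectrum at a stop-burst onset and summarizes it with a spectral centroid. This is a generic burst-spectra recipe, not the paper's actual analysis: the window length, windowing function, burst-onset labeling, and centroid summary are all assumptions.

```python
# Minimal sketch (not the authors' code): a generic burst-spectrum
# measure. Window length, onset labeling, and the centroid summary
# are illustrative assumptions.
import numpy as np
from scipy.signal import get_window

def burst_spectrum(signal, fs, burst_onset_s, win_ms=10.0):
    """Magnitude spectrum (dB) of a short window placed at the burst
    onset; the onset time is assumed to be labeled elsewhere."""
    start = int(burst_onset_s * fs)
    n = int(win_ms / 1000.0 * fs)
    frame = signal[start:start + n] * get_window("hamming", n)
    mag = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, 20.0 * np.log10(mag + 1e-12)

def spectral_centroid(freqs, mag_db):
    """Amplitude-weighted mean frequency, one common one-number
    summary of burst spectra for place-of-articulation contrasts."""
    mag = 10.0 ** (mag_db / 20.0)
    return float(np.sum(freqs * mag) / np.sum(mag))
```

A shift of the burst centroid toward the region expected for a palatal closure would be one way to index acoustic learning across training phases.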

Highlights

  • Natural conversation is a multimodal process, where the visual information contained in a speaker’s face plays an important role in decoding the speech signal

  • Each participant showed a rapid increase in accuracy during the visual feedback phase, ranging from 50 to 100% (mean = 74.9%, SD = 15.6)

  • The results of kinematic analyses indicate that real-time visual feedback resulted in improved accuracy of consonant place of articulation

Summary

INTRODUCTION

Natural conversation is a multimodal process, in which the visual information contained in a speaker’s face plays an important role in decoding the speech signal. Earlier EMA training work reported improved acquisition and maintenance by participants who received traditional instruction plus EMA-based training; these findings suggest that visual information regarding consonant place of articulation can assist second language learners with accent reduction. In another recent study, Suemitsu et al. (2013) tested a 2D EMA-based articulatory feedback approach to facilitate production of an unfamiliar English vowel (/æ/) by five native speakers of Japanese. In the present experiment we investigated the accuracy with which healthy monolingual talkers could produce a novel, non-English speech sound (articulated by placing the tongue blade at the palatal region of the oral cavity) and whether this gesture could benefit from short-term articulatory training with visual feedback.
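To make the kinematic evaluation concrete, here is a minimal, hypothetical sketch of how tongue-tip spatial accuracy might be scored from EMA sensor coordinates: each production's tongue-tip position at stop closure is compared against a palatal target, and accuracy is the percentage of productions within a tolerance radius. The function name, coordinate frame, and the 5 mm radius are illustrative assumptions, not details from the study.

```python
# Hypothetical sketch (not the study's software): scoring tongue-tip
# EMA positions against an assumed palatal target.
import numpy as np

def tongue_tip_accuracy(tt_mm, target_mm, radius_mm=5.0):
    """Percent of productions whose tongue-tip EMA position at stop
    closure lies within `radius_mm` of the palatal target.

    tt_mm:     (n_trials, 3) array of head-corrected x/y/z positions
    target_mm: (3,) palatal target in the same coordinate frame
    """
    dists = np.linalg.norm(np.asarray(tt_mm) - np.asarray(target_mm), axis=1)
    return 100.0 * float(np.mean(dists <= radius_mm))
```

A per-trial distance computed the same way could, in principle, also drive a real-time display such as the 3D tongue model, for example by highlighting the target region whenever the sensor enters the tolerance zone.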

MATERIALS AND METHODS
Participants and Stimuli
Kinematic Results
Acoustic Results
DISCUSSION