Abstract

Behavioral studies have shown that the ability to discriminate between non-native speech sounds improves after seeing how the sounds are articulated. This study examined the influence of visual articulatory information on the neural correlates of non-native speech sound discrimination. English speakers’ discrimination of the Hindi dental and retroflex sounds was measured using the mismatch negativity (MMN) event-related potential, before and after they completed one of three 8-min training conditions. In an audio-visual speech training condition (n = 14), each sound was presented with its corresponding visual articulation. In one control condition (n = 14), both sounds were presented with the same visual articulation, resulting in one congruent and one incongruent audio-visual pairing. In another control condition (n = 14), both sounds were presented with the same image of a still face. The control conditions aimed to rule out the possibility that the MMN is influenced by non-specific audio-visual pairings, or by general exposure to the dental and retroflex sounds over the course of the study. The results showed that audio-visual speech training reduced the latency of the MMN but did not affect MMN amplitude. No change in MMN amplitude or latency was observed for the two control conditions. The pattern of results suggests that a relatively short audio-visual speech training session (i.e., 8 min) may increase the speed with which the brain processes non-native speech sound contrasts. The absence of a training effect on MMN amplitude suggests a single session of audio-visual speech training does not lead to the formation of more discrete memory traces for non-native speech sounds. Longer and/or multiple sessions might be needed to influence the MMN amplitude.

Highlights

  • A well-known difficulty of learning a second language in adulthood is discriminating between non-native speech sounds (Aoyama et al., 2004)

  • To test the hypotheses advanced in this study, planned comparisons comprising paired-samples t-tests were used to examine pre- to post-test changes in mismatch negativity (MMN) amplitude and latency in each of the three conditions

  • This study provides evidence suggesting that AV speech training alone can influence the neural correlates of non-native speech sound discrimination

Introduction

A well-known difficulty of learning a second language in adulthood is discriminating between non-native speech sounds (Aoyama et al., 2004). Native English speakers can have difficulty discriminating the Hindi dental (e.g., /t̪/) and retroflex (e.g., /ʈ/) sounds (Werker and Lalonde, 1988; Pruitt et al., 2006; MacLean and Ward, 2016). This is because English speakers perceive both the dental and retroflex sounds as mapping onto a single English phoneme category (e.g., /t/). A number of behavioral studies have shown that the ability to discriminate between non-native speech sounds improves after seeing how the sounds are articulated (Hardison, 2003, 2005; Hazan et al., 2005; Hirata and Kelly, 2010; Llompart and Reinisch, 2017). The current study used electroencephalography, specifically event-related potentials (ERPs), to examine the influence of visual articulatory information on non-native speech sound discrimination.

