Abstract

The visual mismatch negativity (vMMN), which derives from the brain's response to stimulus deviance, is thought to be generated by the cortex that represents the stimulus. The vMMN response to visual speech stimuli was used here to study the lateralization of visual speech processing. Previous research suggested that the right posterior temporal cortex is specialized for processing simple non-speech face gestures, and the left posterior temporal cortex is specialized for processing visual speech gestures. Here, visual speech consonant-vowel (CV) stimuli with controlled perceptual dissimilarities were presented in an electroencephalography (EEG) vMMN paradigm. The vMMNs were obtained by comparing event-related potentials (ERPs) for each CV in its role as deviant vs. its role as standard. Four separate vMMN contrasts were tested, two with perceptually far deviants (i.e., “zha” or “fa”) and two with perceptually near deviants (i.e., “zha” or “ta”). Only the far deviants evoked a vMMN response over the left posterior temporal cortex. All four deviants evoked vMMNs over the right posterior temporal cortex. The results are interpreted as evidence that the left posterior temporal cortex represents speech contrasts that are perceived as different consonants, and that the right posterior temporal cortex represents face gestures that may not be perceived as different CVs.
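To make the identity-controlled comparison concrete, the following is a minimal Python/NumPy sketch, not the study's actual analysis pipeline, of how the ERP for a stimulus in its deviant role can be contrasted with the ERP for the identical stimulus in its standard role. The array names, trial counts, and dimensions are illustrative assumptions.

import numpy as np

# Hypothetical single-trial EEG epochs for one CV stimulus (e.g., "zha"),
# time-locked to stimulus onset: shape = (n_trials, n_channels, n_times).
rng = np.random.default_rng(0)
n_channels, n_times = 64, 300
zha_as_deviant = rng.normal(size=(80, n_channels, n_times))    # trials where "zha" was the rare deviant
zha_as_standard = rng.normal(size=(400, n_channels, n_times))  # trials where "zha" was the frequent standard

# ERP for each role: average across trials of the same physical stimulus.
erp_deviant = zha_as_deviant.mean(axis=0)     # (n_channels, n_times)
erp_standard = zha_as_standard.mean(axis=0)

# Identity-controlled vMMN estimate: deviant-role ERP minus standard-role ERP.
# Because the stimulus is physically identical in both roles, low-level stimulus
# differences cancel, leaving the response attributable to deviance itself.
vmmn_difference_wave = erp_deviant - erp_standard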

Highlights

  • The visual mismatch negativity paradigm was used here to investigate visual speech processing

  • The vMMN has been confirmed across numerous studies (Pazo-Alvarez et al., 2003; Czigler, 2007; Kimura et al., 2011; Winkler and Czigler, 2012); it is elicited by changes in the regularities of a stimulus sequence across different levels of representation, including deviations caused by spatiotemporal visual features (Pazo-Alvarez et al., 2004), conjunctions of visual features (Winkler et al., 2005), emotional faces (Li et al., 2012; Stefanics et al., 2012), and abstract visual stimulus properties such as bilateral symmetry (Kecskes-Kovacs et al., 2013) and sequential visual stimulus probability (Stefanics et al., 2011)

  • Discrimination d′ scores were compared using an analysis of variance (ANOVA) with the within-subjects factor of stimulus distance and the between-subjects factor of group (a minimal sketch of this analysis follows this list)
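The sketch below shows one common way to carry out such an analysis in Python, using the standard yes/no signal-detection formula d′ = Z(hit rate) − Z(false-alarm rate) and the pingouin package's mixed_anova function. The data values, group labels, and choice of package are illustrative assumptions, not the study's actual data or software.

import numpy as np
import pandas as pd
import pingouin as pg
from scipy.stats import norm

def d_prime(hit_rate, false_alarm_rate):
    # d' = Z(hit rate) - Z(false-alarm rate); rates are clipped away from
    # 0 and 1 so the inverse-normal transform stays finite.
    hr = np.clip(hit_rate, 0.01, 0.99)
    far = np.clip(false_alarm_rate, 0.01, 0.99)
    return norm.ppf(hr) - norm.ppf(far)

# Hypothetical long-format table: one d' per participant per stimulus distance.
df = pd.DataFrame({
    "subject":  [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "group":    ["A"] * 6 + ["B"] * 6,
    "distance": ["near", "far"] * 6,
    "dprime":   [d_prime(h, f) for h, f in
                 [(0.70, 0.20), (0.95, 0.10), (0.65, 0.25), (0.90, 0.12),
                  (0.72, 0.22), (0.93, 0.08), (0.68, 0.18), (0.96, 0.09),
                  (0.66, 0.21), (0.92, 0.11), (0.71, 0.19), (0.94, 0.07)]],
})

# Mixed ANOVA: within-subjects factor = stimulus distance,
# between-subjects factor = group.
aov = pg.mixed_anova(data=df, dv="dprime", within="distance",
                     between="group", subject="subject")
print(aov)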


Introduction

The visual mismatch negativity (vMMN) paradigm was used here to investigate visual speech processing. The classical auditory MMN is generated by the brain’s automatic response to a change in repeated stimulation that exceeds a threshold corresponding approximately to the behavioral discrimination threshold. It is elicited by violations of regularities in a sequence of stimuli, whether the stimuli are attended or not, and the response typically peaks 100–200 ms after onset of the deviance (Näätänen et al., 1978, 2005, 2007). The vMMN, the visual analogue of the auditory MMN, is likewise elicited by changes in the regularities of a stimulus sequence, across different levels of representation, including deviations caused by spatiotemporal visual features (Pazo-Alvarez et al., 2004), conjunctions of visual features (Winkler et al., 2005), emotional faces (Li et al., 2012; Stefanics et al., 2012), and abstract visual stimulus properties such as bilateral symmetry (Kecskes-Kovacs et al., 2013) and sequential visual stimulus probability (Stefanics et al., 2011).
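To illustrate the oddball structure that underlies MMN and vMMN paradigms, the following is a minimal Python sketch of a stimulus-sequence generator in which rare deviants interrupt a run of frequent standards. The deviant probability, sequence length, and spacing rule are illustrative assumptions rather than this study's parameters.

import numpy as np

def make_oddball_sequence(n_trials, deviant_prob=0.1, min_standards_between=2, seed=None):
    # Build a sequence of "standard"/"deviant" labels in which deviants occur
    # with probability of roughly deviant_prob but never back-to-back: at least
    # min_standards_between standards separate successive deviants, so each
    # deviant violates an established regularity.
    rng = np.random.default_rng(seed)
    sequence, since_last_deviant = [], min_standards_between
    for _ in range(n_trials):
        if since_last_deviant >= min_standards_between and rng.random() < deviant_prob:
            sequence.append("deviant")
            since_last_deviant = 0
        else:
            sequence.append("standard")
            since_last_deviant += 1
    return sequence

sequence = make_oddball_sequence(500, deviant_prob=0.1, seed=1)
print(sequence[:20])
print("deviant rate:", sequence.count("deviant") / len(sequence))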

