Abstract

Visual mismatch negativity (vMMN), a component of event-related potentials (ERPs), can be elicited when rarely presented “deviant” facial expressions violate a regularity formed by repeated “standard” faces. vMMN is observed as the differential ERP elicited between the deviant and standard faces. It is not clear, however, whether differential ERPs to rare emotional faces interspersed with repeated neutral ones reflect true vMMN (i.e., detection of a regularity violation) or merely encoding of the emotional content of the faces. Furthermore, the face-sensitive N170 response, which reflects structural encoding of facial features, can be modulated by emotional expressions. Owing to their similar latencies and scalp topographies, the N170 and vMMN are difficult to separate. We recorded ERPs to neutral, fearful, and happy faces in two different stimulus presentation conditions in adult humans. For the oddball condition group, frequently presented neutral expressions (p = 0.8) were rarely replaced by happy or fearful expressions (p = 0.1 each), whereas for the equiprobable condition group, fearful, happy, and neutral expressions were presented with equal probability (p = 0.33). Independent component analysis (ICA) revealed two prominent components in both stimulus conditions in the relevant latency range and scalp location. A component peaking at 130 ms post stimulus showed a difference in scalp topography between the oddball (bilateral) and the equiprobable (right-dominant) conditions. The other component, peaking at 170 ms post stimulus, showed no difference between the conditions. The bilateral component at the 130-ms latency in the oddball condition conforms to vMMN. Moreover, it was distinct from the N170, which was modulated only by the emotional expression. The present results suggest that future studies on vMMN to facial expressions should take into account possible confounding effects caused by the differential processing of the emotional expressions as such.
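For concreteness, the two presentation schedules described above can be sketched as stimulus sequences. The snippet below is a minimal Python/NumPy illustration under the stated probabilities, not the authors' stimulation code; the trial count and random seed are arbitrary choices for the example.

    import numpy as np

    rng = np.random.default_rng(0)
    N_TRIALS = 1000

    # Oddball condition: neutral standards (p = 0.8), happy and fearful
    # deviants (p = 0.1 each), as described in the abstract.
    oddball = rng.choice(["neutral", "happy", "fearful"],
                         size=N_TRIALS, p=[0.8, 0.1, 0.1])

    # Equiprobable control: all three expressions equally likely (p = 0.33).
    equiprobable = rng.choice(["neutral", "happy", "fearful"],
                              size=N_TRIALS, p=[1/3, 1/3, 1/3])

    # Empirical proportions should approximate the design probabilities.
    print({c: np.mean(oddball == c) for c in ["neutral", "happy", "fearful"]})

The point of the equiprobable control is that each expression is equally frequent there, so any deviant-related ERP difference in the oddball condition that exceeds the equiprobable baseline cannot be attributed to the emotional content of the faces alone.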

Highlights

  • Other people’s facial expressions convey socially important information about other individuals’ emotions and social intentions (Keltner et al., 2003)

  • The present results suggest that future studies on visual mismatch negativity (vMMN) to facial expressions should take into account possible confounding effects caused by the differential processing of the emotional expressions as such

  • An event-related potential (ERP) component called visual mismatch negativity (vMMN) offers a feasible method for studying the automatic encoding of several types of visual stimuli, including faces. vMMN is elicited by rare (“deviant”) stimuli interspersed with repeated (“standard”) stimuli and is observed as a differential ERP response between the two. vMMN can be observed in conditions where participants are instructed to ignore the visual stimuli eliciting the vMMN and to attend to other visual stimuli (e.g., Stefanics et al., 2012) or auditory stimuli (e.g., Astikainen and Hietanen, 2009)

Introduction

Other people’s facial expressions convey socially important information about other individuals’ emotions and social intentions (Keltner et al., 2003). vMMN is considered to reflect a process of detecting a mismatch between the representation of the repeated standard stimulus in transient memory and the current sensory input (Czigler et al., 2002; Astikainen et al., 2008; Kimura et al., 2009), analogously to auditory MMN (for the trace-mismatch explanation of MMN, see Näätänen, 1990). The standard stimuli can be physically variant, but if they form a sequential regularity, deviant stimuli violating this regularity elicit vMMN (Astikainen and Hietanen, 2009; Kimura et al., 2010, 2011; Stefanics et al., 2011, 2012; for a review, see Kimura, 2012). vMMN elicitation has recently been linked to predictive coding theories (Friston, 2005), which postulate a prediction error between the neural model, based on representations of visual objects in memory, and the actual perceptual input (Winkler and Czigler, 2012).
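To make the trace-mismatch logic concrete, the sketch below computes a deviant-minus-standard difference wave, which is how vMMN is typically quantified. It is a minimal Python/NumPy illustration on simulated data, not the authors' analysis pipeline; the array dimensions, sampling rate, and channel index are hypothetical.

    import numpy as np

    rng = np.random.default_rng(1)
    n_trials, n_channels, n_times = 100, 64, 300  # hypothetical dimensions

    # Simulated single-trial EEG epochs (trials x channels x time points).
    standard_epochs = rng.normal(size=(n_trials, n_channels, n_times))
    deviant_epochs = rng.normal(size=(n_trials, n_channels, n_times))

    # Average across trials to obtain the ERPs, then subtract:
    # the deviant-minus-standard difference wave is the vMMN estimate.
    standard_erp = standard_epochs.mean(axis=0)
    deviant_erp = deviant_epochs.mean(axis=0)
    vmmn = deviant_erp - standard_erp  # channels x time points

    # Peak latency of the (negative) difference at one posterior channel,
    # assuming a 1000 Hz sampling rate and stimulus onset at sample 0,
    # so that the sample index equals milliseconds post stimulus.
    posterior_channel = 30  # hypothetical channel index
    peak_sample = np.argmin(vmmn[posterior_channel])
    print(f"vMMN peak at {peak_sample} ms post stimulus")

In practice the difference wave would be computed from cleaned, baseline-corrected epochs, and the present study additionally decomposed the responses with ICA to separate the vMMN-like component from the N170.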
