Abstract

Our visual system extracts the emotional meaning of human facial expressions rapidly and automatically. Novel paradigms using fast periodic stimulation have provided insights into the electrophysiological processes underlying emotional content extraction: the regular occurrence of specific identities and/or emotional expressions alone can drive diagnostic brain responses. Consistent with a processing advantage for social cues of threat, we expected angry facial expressions to drive larger responses than neutral expressions. In a series of four EEG experiments, we studied the potential boundary conditions of such an effect: (i) we piloted emotional cue extraction using 9 facial identities and a fast presentation rate of 15 Hz (N = 16); (ii) we reduced the facial identities from 9 to 2, to assess whether (low or high) variability across emotional expressions would modulate brain responses (N = 16); (iii) we slowed the presentation rate from 15 Hz to 6 Hz (N = 31), the optimal presentation rate for facial feature extraction; (iv) we tested whether passive viewing instead of a concurrent task at fixation would play a role (N = 30). We consistently observed neural responses reflecting the rate of regularly presented emotional expressions (5 Hz and 2 Hz at presentation rates of 15 Hz and 6 Hz, respectively). Intriguingly, neutral expressions consistently produced stronger responses than angry expressions, contrary to the predicted processing advantage for threat-related stimuli. Our findings highlight the influence of physical differences across facial identities and emotional expressions.
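The logic of the frequency-tagging paradigm described above can be illustrated with a short simulation (a sketch under assumed parameters: the sampling rate, response amplitudes, noise level, and recording length are illustrative and not taken from the experiments). A periodic neural response to faces presented at the 6 Hz base rate, plus a response to the emotional expression embedded regularly at 2 Hz, appears as narrow peaks at exactly those frequencies in the EEG amplitude spectrum:

```python
import numpy as np

fs = 256                                  # assumed sampling rate (Hz)
t = np.arange(0, 20, 1 / fs)              # 20 s of simulated recording
rng = np.random.default_rng(1)

# Simulated signal: a response at the 6 Hz presentation rate, a weaker
# response at the 2 Hz expression rate, plus broadband background noise.
eeg = (1.0 * np.sin(2 * np.pi * 6 * t)
       + 0.5 * np.sin(2 * np.pi * 2 * t)
       + rng.normal(scale=2.0, size=t.size))

freqs = np.fft.rfftfreq(t.size, 1 / fs)
amp = np.abs(np.fft.rfft(eeg)) / t.size   # single-sided amplitude spectrum

def amp_at(f_hz):
    """Amplitude in the spectral bin nearest f_hz."""
    return amp[np.argmin(np.abs(freqs - f_hz))]

# The tagged frequencies stand out against neighboring noise-only bins.
print(amp_at(6.0), amp_at(2.0), amp_at(6.5))
```

Because the stimulation is strictly periodic, all response energy concentrates in a handful of frequency bins, which is what makes the tagged responses separable from broadband noise without trial averaging in the time domain.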

Highlights

  • We examined three convergence diagnostics: (i) the ratio of the effective number of samples to the total number of samples, which we aimed to keep larger than 0.1 to avoid excessive dependency between samples; (ii) the Gelman-Rubin R̂ statistic [49], which compares between-chain variability to within-chain variability [50] and, as a rule of thumb, should not exceed 1.05, or the chains may not have converged; (iii) the Monte Carlo standard error (MCSE), the standard deviation of the chains divided by the square root of their effective sample size, a measure of sampling noise [51].

  • The human brain is capable of rapidly processing differences in facial expressions and identifying those that signal threat, presumably due to the survival advantage of such an ability [1,2].

  • Irrespective of face orientation, regular conditions elicited larger steady-state visual evoked potentials (SSVEPs) relative to irregular presentations, indicating that our stimulation protocol produced the intended regularity-driven SSVEPs. This is further demonstrated by the prominent parieto-occipital topographies of SSVEP maxima in the regular conditions, which are absent in the irregular conditions.
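The three sampling diagnostics listed in the first bullet can be sketched in a few lines (a minimal sketch: the function names and the simple initial-positive-sequence ESS estimator are illustrative, not the authors' implementation or any particular library's API):

```python
import numpy as np

def gelman_rubin(chains):
    """R-hat: pooled-variance estimate vs. within-chain variance.
    chains: array of shape (n_chains, n_samples)."""
    m, n = chains.shape
    w = chains.var(axis=1, ddof=1).mean()       # within-chain variance W
    b = n * chains.mean(axis=1).var(ddof=1)     # between-chain variance B
    var_hat = (n - 1) / n * w + b / n           # pooled variance estimate
    return np.sqrt(var_hat / w)

def effective_sample_size(chain):
    """Crude ESS: n / (1 + 2 * sum of leading positive autocorrelations)."""
    n = len(chain)
    x = chain - chain.mean()
    acf = np.correlate(x, x, mode="full")[n - 1:] / (x.var() * n)
    tau = 1.0
    for rho in acf[1:]:
        if rho < 0:                             # truncate at first negative lag
            break
        tau += 2 * rho
    return n / tau

def mcse(chain):
    """Monte Carlo standard error: sd / sqrt(ESS)."""
    return chain.std(ddof=1) / np.sqrt(effective_sample_size(chain))

# Four well-mixed (here i.i.d.) chains should pass all three checks:
# R-hat near 1.0, ESS ratio well above 0.1, and a small MCSE.
chains = np.random.default_rng(0).normal(size=(4, 2000))
print(gelman_rubin(chains))
print(effective_sample_size(chains[0]) / 2000)
print(mcse(chains[0]))
```

A poorly mixed sampler shows the opposite pattern: strong autocorrelation inflates tau (shrinking the ESS ratio below 0.1), disagreement between chains pushes R̂ above 1.05, and the MCSE grows accordingly.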


