Abstract

Magnetoencephalography (MEG) was used to examine the cerebral response to affective non-verbal vocalizations (ANVs) at the single-subject level. Stimuli consisted of non-verbal affect bursts from the Montreal Affective Voices, morphed to parametrically vary their acoustical structure and perceived emotional properties. Scalp magnetic fields were recorded in three participants while they performed a three-alternative forced-choice emotion categorization task (Anger, Fear, Pleasure). Each participant performed more than 6000 trials, allowing single-subject statistical analyses with a new toolbox that implements the general linear model (GLM) on stimulus-specific responses (LIMO-EEG). For each participant we estimated "simple" models, including a single affective regressor (Arousal or Valence), as well as "combined" models that additionally included acoustical regressors. Results from the "simple" models revealed, in every participant, the significant early effects (as early as ~100 ms after stimulus onset) of Valence and Arousal already reported at the group level in previous work. In the "combined" models, however, few effects of Arousal survived removal of the acoustically explained variance, whereas significant effects of Valence remained, especially at late stages. This study demonstrates (i) that single-subject analyses replicate the early-stage results observed in group-level studies and (ii) the feasibility of GLM-based analyses of MEG data. It also suggests that the early modulation of MEG amplitude by affective stimuli partly reflects the stimuli's acoustical properties.
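To make the modeling approach concrete, the sketch below illustrates the mass-univariate GLM logic described above. It is not the LIMO-EEG toolbox itself (a MATLAB package); the function names, design-matrix layout, and data shapes are illustrative assumptions. The "simple" versus "combined" comparison is expressed here as a nested-model F-test: how much variance does an affective regressor such as Valence still explain once acoustical regressors are already in the model?

```python
# Minimal sketch of a mass-univariate GLM on single-trial MEG amplitudes,
# in the spirit of LIMO-EEG. All names and shapes are illustrative, not the
# toolbox API.
import numpy as np
from scipy import stats

def fit_glm(Y, X):
    """Ordinary least squares at every (sensor, time) point.
    Y: trials x sensors x times single-trial amplitudes
    X: trials x regressors design matrix (first column = intercept)
    Returns betas (regressors x sensors x times) and residual sum of squares."""
    n_trials = Y.shape[0]
    Yf = Y.reshape(n_trials, -1)                   # flatten sensors x times
    betas, _, _, _ = np.linalg.lstsq(X, Yf, rcond=None)
    resid = Yf - X @ betas
    ss_res = np.sum(resid ** 2, axis=0)
    return betas.reshape(X.shape[1], *Y.shape[1:]), ss_res

def nested_f_test(Y, X_full, X_reduced):
    """F-test for the variance explained by regressors present in X_full
    but not in X_reduced (e.g., Valence after acoustical regressors)."""
    n = Y.shape[0]
    _, ss_full = fit_glm(Y, X_full)
    _, ss_red = fit_glm(Y, X_reduced)
    df1 = X_full.shape[1] - X_reduced.shape[1]
    df2 = n - X_full.shape[1]
    F = ((ss_red - ss_full) / df1) / (ss_full / df2)
    p = stats.f.sf(F, df1, df2)
    return F.reshape(Y.shape[1:]), p.reshape(Y.shape[1:])

# Toy example with fabricated data: 6000 trials, 102 sensors, 300 time points.
rng = np.random.default_rng(0)
Y = rng.standard_normal((6000, 102, 300))
valence = rng.uniform(-1, 1, 6000)                 # per-trial Valence rating
acoustic = rng.standard_normal((6000, 2))          # e.g., acoustical components

ones = np.ones((6000, 1))
X_simple = np.column_stack([ones, valence])              # "simple" model
X_combined = np.column_stack([ones, acoustic, valence])  # "combined" model
X_acoustic = np.column_stack([ones, acoustic])           # reduced model

# Does Valence still explain variance once acoustics are regressed out?
F, p = nested_f_test(Y, X_combined, X_acoustic)
```

In this framing, the paper's key contrast is whether the nested F for an affective regressor survives at early (~100 ms) versus late (LPP-range) time points once the acoustical regressors have absorbed their share of the variance.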

Introduction

Accurate recognition and interpretation of emotional states is crucial for social interaction. Humans communicate their feelings by verbal or non-verbal means such as body gestures, facial expressions, or affective non-verbal vocalizations (ANVs). In addition to gender, age and other attributes, voices convey information about the speaker's emotional state (Belin et al., 2004, 2011; Schirmer and Kotz, 2006). Electroencephalography (EEG) studies have found evoked response differences between affective and neutral vocalizations as early as 100 ms (Jessen and Kotz, 2011; Liu et al., 2012) or 150 ms (Sauter and Eimer, 2010) after stimulus onset. Emotionally intense stimuli are generally associated with a larger Late Positive Potential (LPP) component (∼400–600 ms) over centro-parietal sensors (Keil et al., 2002; Schupp et al., 2006; Kanske and Kotz, 2007; Flaisch et al., 2008; Herbert et al., 2008; Olofsson et al., 2008; Pastor et al., 2008; Paulmann and Kotz, 2008a; Liu et al., 2012).

