Abstract

Spatial frequency (SF) contents have been shown to play an important role in emotion perception. This study employed event-related potentials (ERPs) to explore the time course of neural dynamics involved in the processing of facial expression conveying specific SF information. Participants completed a dual-target rapid serial visual presentation (RSVP) task, in which SF-filtered happy, fearful, and neutral faces were presented. The face-sensitive N170 component distinguished emotional (happy and fearful) faces from neutral faces in a low spatial frequency (LSF) condition, while only happy faces were distinguished from neutral faces in a high spatial frequency (HSF) condition. The later P3 component differentiated between the three types of emotional faces in both LSF and HSF conditions. Furthermore, LSF information elicited larger P1 amplitudes than did HSF information, while HSF information elicited larger N170 and P3 amplitudes than did LSF information. Taken together, these results suggest that emotion perception is selectively tuned to distinctive SF contents at different temporal processing stages.

Highlights

  • Throughout evolution, humans have developed the ability to detect and respond to certain challenges and opportunities[1,2], especially under conditions of limited attentional resources.

  • Pairwise comparisons of the main effect of facial expression showed that accuracies for fearful (96.0 ± 0.6%, p < 0.001) and happy faces (95.1 ± 0.8%, p = 0.008) were higher than for neutral faces (90.5 ± 1.2%), while the former two emotion conditions showed no significant difference (p = 0.750).

  • We found that happy-face categorization was less affected by the absence of certain spatial frequency (SF) channels, which could explain why happy faces remain recognizable across a variety of viewing distances, when either low spatial frequency (LSF) or high spatial frequency (HSF) information is available[18,51].


Introduction

Throughout evolution, humans have developed the ability to detect and respond to certain challenges and opportunities[1,2], especially under conditions of limited attentional resources (e.g., in an emergency situation). The dual-route model of emotion processing suggests that there are two parallel routes for the processing of emotional information: a subcortical "low road" that provides fast, but crude, biologically significant signals to the amygdala, and a longer, slower "high road" that processes detailed information through cortical visual areas[2,12,13]. In support of this model, Vuilleumier et al.[14] found larger amygdala and subcortical (pulvinar and superior colliculus) activation for LSF, but not HSF, information during fearful expression perception, suggesting a functional role for the subcortical pathway in providing coarse, threat-related signals. There is an emerging view that the processing of affective visual stimuli relies on both LSF and HSF information, and that the dual-route model needs to be revised into a more flexible framework: the multiple-waves model[17]. This model postulates that multiple cortical regions, as well as subcortical structures, play a prominent role in the processing of ecologically relevant signals[17,18,19,20]. The late P3 component is sensitive to stimulus valence, reflecting a more elaborate processing of emotional information[27].

