The current study examines the neural mechanisms underlying facial recognition, focusing on how emotional expression and mouth display modulate event-related potential (ERP) waveforms. Forty-two participants categorized faces by gender in one of two experimental setups: one featuring full-face images and another with cropped faces presented against neutral gray backgrounds. The stimuli comprised 288 images balanced across gender, race/ethnicity, emotional expression (“Fearful”, “Happy”, “Neutral”), and mouth display (“closed mouth” vs. “open mouth with exposed teeth”). N170 amplitude was significantly greater for open-mouth (exposed teeth) conditions (p < 0.01), independent of emotional expression, and no interaction between emotional expression and mouth display was found; the P100 amplitude, however, showed a significant interaction between the two factors (p < 0.05). Monte Carlo simulations of N170 latency differences indicated that fearful faces elicited a faster response than happy and neutral faces, with the observed 2 ms difference unlikely to have arisen by chance (p < 0.01). These findings challenge prior research suggesting that N170 is directly influenced by emotional expression, while leaving open emotional intensity as an alternative explanation that future studies should disentangle. They also underscore the critical need to control for mouth display when investigating emotional face processing. Beyond refining our understanding of the neural dynamics of face perception, the results confirm that the brain processes fearful expressions more rapidly than happy or neutral ones, offering valuable methodological considerations for future neuroimaging research on emotion perception.
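The abstract does not specify the exact resampling procedure behind the Monte Carlo analysis of N170 latencies. As a minimal sketch, assuming per-participant peak latencies and a sign-flipping permutation test on the within-participant condition difference, the logic could look like the following; all variable names and the simulated values are illustrative, not the authors' data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-participant N170 peak latencies (ms) for two conditions;
# the numbers below are placeholders, not results from the study.
n_participants = 42
lat_fearful = rng.normal(168, 8, n_participants)
lat_neutral = rng.normal(170, 8, n_participants)

diffs = lat_neutral - lat_fearful          # within-participant differences
observed_diff = np.mean(diffs)             # e.g. on the order of ~2 ms

# Monte Carlo null distribution: randomly flip the sign of each participant's
# difference (as if condition labels were exchangeable) and recompute the mean.
n_permutations = 10_000
null = np.empty(n_permutations)
for i in range(n_permutations):
    signs = rng.choice([-1, 1], size=n_participants)
    null[i] = np.mean(signs * diffs)

# Two-tailed Monte Carlo p-value: proportion of permuted mean differences at
# least as extreme as the observed mean difference.
p_value = (np.sum(np.abs(null) >= abs(observed_diff)) + 1) / (n_permutations + 1)
print(f"observed difference: {observed_diff:.2f} ms, p = {p_value:.4f}")
```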