Abstract

People frequently regulate their own behaviour in an effort to be socially appropriate. Here we ask how self-monitoring influences our accuracy when reading others’ facial expressions. We used webcams and pre-programmed conversations to induce self-monitoring or other-monitoring in participants, before they classified the affective facial expressions of video-recorded actors. Two experiments showed that self-monitoring reduces sensitivity to affective facial expression in others. Experiment 1 showed that self-monitoring participants were less sensitive to emotional facial expressions than other-monitoring and neutral condition participants. Experiment 2 found the same result, but only in participants who rated the pre-programmed conversations as high in believability. We discuss possible mechanisms by which this may occur, including the role of social stress, divided attention, and automatic latent imitation when processing others’ facial expressions.

Full Text

Paper version not known.

