Abstract

Enfacement is an illusion wherein synchronous visual and tactile inputs update the mental representation of one’s own face to assimilate another person’s face. Emotional facial expressions, serving as communicative signals, may influence enfacement by increasing the observer’s motivation to understand the mental state of the expresser. Fearful expressions, in particular, might increase enfacement because they are valuable for adaptive behavior and more strongly represented in somatosensory cortex than other emotions. In the present study, a face was seen being touched at the same time as the participant’s own face. This face was either neutral, fearful, or angry. Anger was chosen as an emotional control condition for fear because it is similarly negative but induces less somatosensory resonance, and requires additional knowledge (i.e., contextual information and social contingencies) to effectively guide behavior. We hypothesized that seeing a fearful face (but not an angry one) would increase enfacement because of greater somatosensory resonance. Surprisingly, neither fearful nor angry expressions modulated the degree of enfacement relative to neutral expressions. Synchronous interpersonal visuo-tactile stimulation led to assimilation of the other’s face, but this assimilation was not modulated by facial expression processing. This finding suggests that dynamic, multisensory processes of self-face identification operate independently of facial expression processing.

Highlights

  • The mental self-face representation comprises an important part of both self-identity and the mental representation of one’s own body [1]

  • Though one might expect enfacement to be reflected in a difference between the morph frames participants chose after synchronous versus asynchronous interpersonal multisensory stimulation (IMS), the crucial comparisons are between each post-IMS morph video judgment and the corresponding pre-IMS baseline judgment made in the same experimental session (a minimal sketch of this comparison follows the list)

  • We hypothesized that seeing a fearful face being touched in synchrony with one’s own face would increase enfacement because of greater motivation to understand the affective state of the other person, a process that would involve enhanced somatosensory resonance
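
To make the pre/post comparison described in the second highlight concrete, the sketch below (not the authors' analysis code) computes per-participant enfacement change scores as the post-IMS morph judgment minus the corresponding pre-IMS baseline from the same session, separately for the synchronous and asynchronous conditions. The variable names, the scoring of each judgment as the percentage of the other's face at the reported self/other boundary, and the example numbers are illustrative assumptions, not values or procedures taken from the study.

from statistics import mean

def enfacement_scores(pre, post):
    """Per-participant change scores: post-IMS judgment minus the pre-IMS
    baseline from the same session. Under the assumed scoring (morph frame
    expressed as percent of the other's face at the reported self/other
    boundary), positive values mean more of the other's features were
    accepted as 'self'."""
    return [after - before for before, after in zip(pre, post)]

# Hypothetical judgments (percent other-face at the self/other boundary).
pre_sync, post_sync = [38, 42, 35, 40], [46, 50, 41, 47]    # synchronous IMS session
pre_async, post_async = [37, 43, 36, 39], [38, 44, 35, 41]  # asynchronous IMS session

sync_change = enfacement_scores(pre_sync, post_sync)
async_change = enfacement_scores(pre_async, post_async)

# Enfacement is indicated when the synchronous change exceeds the asynchronous one.
print(f"mean change after synchronous IMS:  {mean(sync_change):.1f}")
print(f"mean change after asynchronous IMS: {mean(async_change):.1f}")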

Introduction

The mental self-face representation comprises an important part of both self-identity and the mental representation of one’s own body [1]. While earlier work focused on visual self-recognition, more recent accounts have attempted to explain how the mental self-representation, including the self-face, is constructed and updated over time via the convergence of multimodal inputs [7]. This line of research has revealed that multisensory information can update the self-face representation and, under certain circumstances, may blur the distinction between self and other. Concurrent visual and tactile inputs update the mental representation of the self-face, causing participants to accept more of the other person’s facial features as their own [8,10]. This effect has been replicated with additional measures of self/other merging, including the distance the participant chooses between two circles representing “self” and “other” (a variant of the Inclusion of the Other in the Self scale [12]) and a questionnaire assessing the subjective experience of the enfacement illusion, adapted from the rubber hand illusion questionnaire [9,10,11]. A recent study demonstrated that viewing a person from a different ethnic background being touched in synchrony with touch on one’s own face can improve somatosensory resonance with the outgroup member [13].
