Abstract

In the current study, we explored the time course of processing others' pain under induced happy or sad moods. Event-related potentials (ERPs) were recorded while participants observed pictures showing others in painful or non-painful situations. Mood induction procedures were applied before the picture observation task: happy and sad moods were induced by having participants listen to approximately 10 minutes of musical excerpts selected from the Chinese Affective Music System (CAMS). The ERP results revealed that the induced mood influenced the early, automatic components N1, P2, and N2 but not the later, top-down controlled components P3 and LPP. The amplitudes elicited by painful and non-painful stimuli differed significantly only in the sad mood, not in the happy mood, indicating that, compared with a sad mood, participants' ability to discriminate painful from non-painful stimuli was weakened in a happy mood. However, this reduced sensitivity to others' pain in a happy mood does not necessarily reduce the tendency toward prosocial behavior. These findings offer psychophysiological evidence that people's moods can influence their empathic responses to others' pain.
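As one illustration of the contrast described above, the sketch below computes a painful-minus-non-painful difference wave separately for each induced mood using MNE-Python. The epochs file, condition labels, and variable names are hypothetical assumptions for illustration only; they do not reproduce the authors' actual pipeline.

```python
# Hypothetical sketch: painful-minus-non-painful difference waves per induced mood,
# computed with MNE-Python. File name and condition labels are assumptions.
import mne

# Epoched, baseline-corrected EEG for one participant (hypothetical file)
epochs = mne.read_epochs("sub01_mood_pain-epo.fif")

diff_waves = {}
for mood in ("happy", "sad"):
    painful = epochs[f"{mood}/painful"].average()
    nonpainful = epochs[f"{mood}/nonpainful"].average()
    # weights=[1, -1] subtracts the non-painful ERP from the painful ERP
    diff_waves[mood] = mne.combine_evoked([painful, nonpainful], weights=[1, -1])

# Mean amplitude of each difference wave in the N1 window (100-160 ms)
for mood, evoked in diff_waves.items():
    n1 = evoked.copy().crop(tmin=0.100, tmax=0.160)
    print(mood, n1.data.mean())
```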

Highlights

  • Empathy is defined as the ability to vicariously share the affective states of others [1,2]

  • Half of the participants listened to musical excerpts intended to induce a happy mood first; the other half listened to excerpts intended to induce a sad mood first

  • We found significant Mood × Picture interactions for the N1, P2, and N2 components, such that the painful pictures elicited significantly more negative amplitudes than the non-painful pictures only under the induced sad mood, not under the induced happy mood


Introduction

Empathy is defined as the ability to vicariously share the affective states of others [1,2]. Repeated-measures ANOVAs with a 2 (Mood: happy/sad) × 2 (Picture: painful/non-painful) × 5 (Region: frontal, central, centro-parietal, parietal, parieto-occipital) design were performed for each component within its most pronounced time window: N1 (100–160 ms), P2 (160–220 ms), N2 (200–300 ms), P3 (300–400 ms), and LPP (450–650 ms).
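For readers who want to mirror this kind of analysis, the sketch below shows how a 2 × 2 × 5 repeated-measures ANOVA could be run for one component in Python with statsmodels. The long-format data frame and its column names (subject, mood, picture, region, amplitude) are hypothetical; they are assumptions for illustration, not the study's analysis code.

```python
# Hypothetical sketch: 2 (Mood) x 2 (Picture) x 5 (Region) repeated-measures ANOVA
# on mean ERP amplitudes, run separately for each component's time window.
# The CSV layout and column names are assumptions, not the authors' pipeline.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Long-format table: one row per subject x mood x picture x region cell,
# where "amplitude" is the mean voltage in the component's window (e.g., 100-160 ms for N1)
df = pd.read_csv("n1_mean_amplitudes.csv")

model = AnovaRM(
    data=df,
    depvar="amplitude",
    subject="subject",
    within=["mood", "picture", "region"],
)
res = model.fit()
print(res.anova_table)  # F values, degrees of freedom, and p values, including Mood x Picture
```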

