Abstract

Top-down processing is a mechanism in which memory, context, and expectation shape the perception of stimuli. In this study we investigated how emotional context, induced by music mood, influences the perception of happy and sad emoticons. Using single-pulse TMS, we stimulated the right occipital face area (rOFA), primary visual cortex (V1), and the vertex while subjects performed a face-detection task and listened to happy or sad music. At baseline, incongruent audio-visual pairings decreased performance, demonstrating that emotion influences the perception of ambiguous faces. During rOFA stimulation, however, face-identification performance decreased regardless of emotional content. No effects were found for vertex or V1 stimulation. These results suggest that the rOFA is important for processing faces regardless of their emotional content, whereas stimulation of early visual cortex had no effect. Our findings therefore suggest that early visual cortex activity may not integrate emotional auditory information with visual information during emotion-dependent top-down modulation of face perception.

Highlights

  • When perceptual input is ambiguous, observers may rely on contextual information to process what they see (Jolij and Meurs, 2011)

  • The arousal scale of the Self-Assessment Manikin (SAM) revealed higher arousal when listening to sad music compared to the happy-music and no-music conditions (F = 8.556, p = 0.002)

  • We investigated the functional role of early visual cortical areas in emotion-dependent top-down modulation of the perception of facial expressions

Introduction

When perceptual input is ambiguous, observers may rely on contextual information to process what they see (Jolij and Meurs, 2011). When subjects in a negative mood view ambiguous faces, they tend to judge the facial expressions as sad (Bouhuys et al., 1995; Niedenthal, 2007). Emotion-laden influences on perception have been demonstrated in a face-detection task in which subjects listened to happy and sad music (Jolij and Meurs, 2011; Jolij et al., 2011). Jolij and Meurs (2011) demonstrated that when subjects rated the facial features of emoticons as happy or sad while passively listening to happy or sad music, participants became

