Abstract

The interaction of audio–visual signals conveying information about the emotional state of others may play a significant role in social engagement. There is ample evidence that recognition of visual emotional information does not necessarily depend on conscious processing. However, little is known about how multisensory integration of affective signals relates to visual awareness. Previous research using masking experiments has shown that audio–visual integration is relatively independent of visual awareness. However, masking does not capture the dynamic nature of consciousness, in which stimulus selection depends on a multitude of signals. Therefore, we presented neutral and happy faces to one eye and houses to the other, producing perceptual rivalry between the two stimuli, while simultaneously presenting laughing, coughing or no sound. Participants were asked to report when they saw the faces, the houses or mixtures of the two, and were instructed to ignore the sounds. When happy facial expressions were shown, participants reported seeing fewer houses than when neutral expressions were shown. In addition, human sounds increased the viewing time of faces compared to when no sound was presented. Taken together, emotional expressions of the face affect which stimulus is selected for visual awareness, and this selection is facilitated by human sounds.
