Abstract

Previous studies have shown that complex visual stimuli, such as emotional facial expressions, can influence brain activity independently of the observers’ awareness. Little is known yet, however, about the “informational correlates” of consciousness – i.e., which low-level information correlates with brain activation during conscious vs. non-conscious perception. Here, we investigated this question in the spatial frequency (SF) domain. We examined which SFs in disgusted and fearful faces modulate activation in the insula and amygdala over time and as a function of awareness, using a combination of intracranial event-related potentials (ERPs), SF Bubbles (Willenbockel et al., 2010a), and Continuous Flash Suppression (CFS; Tsuchiya and Koch, 2005). Patients implanted with electrodes for epilepsy monitoring viewed face photographs (13° × 7°) that were randomly SF filtered on a trial-by-trial basis. In the conscious condition, the faces were visible; in the non-conscious condition, they were rendered invisible using CFS. The data were analyzed by performing multiple linear regressions on the SF filters from each trial and the transformed ERP amplitudes across time. The resulting classification images suggest that many SFs are involved in the conscious and non-conscious perception of emotional expressions, with SFs between 6 and 10 cycles per face width being particularly important early on. The results also revealed qualitative differences between the awareness conditions for both regions. Non-conscious processing relied on low SFs more and was faster than conscious processing. Overall, our findings are consistent with the idea that different pathways are employed for the processing of emotional stimuli under different degrees of awareness. The present study represents a first step to mapping how SF information “flows” through the emotion-processing network with a high temporal resolution and to shedding light on the informational correlates of consciousness in general.
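To make the regression step described above concrete, here is a minimal sketch of how per-trial SF filter profiles can be regressed onto transformed ERP amplitudes to yield an SF × time classification image. The array names (filters, erp), the synthetic data, the z-scoring choice, and the least-squares solver are illustrative assumptions, not the authors' exact pipeline; the statistical thresholding that identifies "significant pixels" is only indicated in a comment.

    # Minimal sketch (Python/NumPy) of a classification-image analysis,
    # assuming hypothetical arrays `filters` (n_trials x n_sf_bins), the random
    # SF filter profile applied on each trial, and `erp` (n_trials x
    # n_timepoints), the ERP amplitude recorded at one electrode per trial.
    import numpy as np

    rng = np.random.default_rng(0)
    n_trials, n_sf_bins, n_timepoints = 500, 128, 300
    filters = rng.random((n_trials, n_sf_bins))          # per-trial SF filter profiles
    erp = rng.standard_normal((n_trials, n_timepoints))  # per-trial ERP amplitudes

    # z-score the ERP amplitudes across trials at each time point
    # (one possible reading of "transformed ERP amplitudes")
    erp_z = (erp - erp.mean(axis=0)) / erp.std(axis=0)

    # Multiple linear regression of the z-scored amplitudes on the SF filters:
    # solve filters @ B ~= erp_z in the least-squares sense, one column of B
    # per time point. B (n_sf_bins x n_timepoints) is the classification image,
    # showing which SFs predict the ERP amplitude at each latency.
    B, *_ = np.linalg.lstsq(filters, erp_z, rcond=None)

    # Significant SF/time pixels would then be identified by comparing the
    # (smoothed) coefficients against a null distribution, e.g. via a
    # permutation test (not shown here).
    print(B.shape)  # (128, 300) -> SF x time classification image

Running this on real single-trial data rather than the synthetic arrays above would produce, for each electrode and awareness condition, a coefficient map analogous to the classification images summarized in the Results.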

Highlights

  • The look on someone’s face can speak volumes.

  • Behavioral results: the detection task served two purposes: (a) to ensure that participants stayed alert during the experiment, and (b) to check on each Continuous Flash Suppression (CFS) trial whether the face broke through the suppression noise.

  • Spatial frequency results: Figure 5 depicts the significant pixels for each spatial frequency (SF) and time bin, up to 1.5 s after stimulus onset, for the overall insula and amygdala classification images (CIs).

Introduction

The look on someone’s face can speak volumes. Emotional facial expressions convey a wealth of information, such as cues about a person’s state of mind or warning signs of potentially threatening situations (e.g., reflected by fear) or materials (e.g., reflected by disgust). Human faces and brains are thought to have co-evolved to be efficient transmitters and decoders of emotional signals, respectively (Smith et al., 2005; Schyns et al., 2007, 2009). Numerous studies have shown that face stimuli rendered “invisible” using techniques such as backward masking (e.g., Smith, in press), binocular rivalry (e.g., Williams et al., 2004), or Continuous Flash Suppression (CFS; e.g., Tsuchiya and Koch, 2005; Jiang and He, 2006; Jiang et al., 2009) can be processed sufficiently for the healthy brain to distinguish neutral from emotional expressions, including fear, disgust, and happiness. It is widely thought that facial expressions can influence neural activity and behavior independently of awareness, and that they constitute a stimulus class well suited for investigating differences between conscious and non-conscious perception in the human brain.
