Abstract
Integrating the spatiotemporal information acquired from the highly dynamic world around us is essential to navigate, reason, and decide properly. Although this is particularly important in a face-to-face conversation, very little research to date has specifically examined the neural correlates of temporal integration in dynamic face perception. Here we present statistically robust observations regarding the brain activations, measured via electroencephalography (EEG), that are specific to temporal integration. To that end, we generate videos of neutral faces of individuals and of non-face objects, modulate the contrast of the even and odd frames at two specific frequencies (f_1 and f_2) in an interlaced manner, and measure the steady-state visual evoked potential as participants view the videos. We then analyze the intermodulation components (IMs: nf_1 ± mf_2, linear combinations of the fundamentals with integer multipliers), which reflect nonlinear processing and, by design, indicate temporal integration. We show that electrodes around the medial temporal, inferior, and medial frontal areas respond strongly and selectively when viewing dynamic faces, which manifests the essential processes underlying our ability to perceive and understand our social world. The generation of IMs is only possible if even and odd frames are processed in succession and integrated temporally; therefore, the strong IMs in our frequency spectrum analysis show that the time between frames (1/60 s) is sufficient for temporal integration.
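To make the stimulus manipulation concrete, the following Python sketch illustrates how interlaced contrast tagging of even and odd frames could be set up. The tag frequencies F1 and F2, the frame count, and the clip length are hypothetical placeholders, since the exact values used in the study are not given in this excerpt; this is a minimal illustration of the design, not the authors' implementation.

```python
import numpy as np

# Minimal sketch of the frequency-tagging design, assuming hypothetical tag
# frequencies and a 60 Hz display (1/60 s between frames, as stated above).
F1, F2 = 7.5, 6.0      # Hz, assumed for illustration
FPS = 60               # frames per second
N_FRAMES = 600         # 10 s clip, assumed

t = np.arange(N_FRAMES) / FPS

# Sinusoidal contrast envelopes (0..1) for each frequency tag.
env_f1 = 0.5 * (1.0 + np.sin(2 * np.pi * F1 * t))
env_f2 = 0.5 * (1.0 + np.sin(2 * np.pi * F2 * t))

# Interlace the tags: even frames follow f1, odd frames follow f2.
frame_idx = np.arange(N_FRAMES)
contrast = np.where(frame_idx % 2 == 0, env_f1, env_f2)

# contrast[i] would then scale frame i's pixel values around the mean
# luminance, e.g. tagged = mean + contrast[i] * (frame - mean).
print(contrast[:8])
```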
Highlights
In the frequency spectrum, we observe prominent fundamentals, harmonics, and intermodulation components (IMs: nf_1 ± mf_2), which are designed to measure temporal integration processing during dynamic face perception
A single frame from the face sequence is repeatedly shown in the static face condition, which serves as a baseline since it does not include any dynamic information
We focus on four nonlinear interactions that reflect the intermingled spatial and temporal processes, and four nonlinear interactions that are specific only to temporal integration processes (see the sketch after these highlights)
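As a rough illustration of where such nonlinear interactions sit in the spectrum, the sketch below enumerates the low-order IM frequencies nf_1 ± mf_2 with n, m ∈ {1, 2}, which yields eight terms. Treating these eight terms as the four-plus-four interactions mentioned above is an assumption made only for illustration, and the tag frequencies are hypothetical; the paper defines the actual split.

```python
import itertools

# Enumerate low-order intermodulation frequencies n*f1 +/- m*f2.
F1, F2 = 7.5, 6.0   # Hz, assumed tag frequencies

ims = {}
for n, m in itertools.product((1, 2), repeat=2):
    ims[f"{n}f1+{m}f2"] = n * F1 + m * F2
    ims[f"|{n}f1-{m}f2|"] = abs(n * F1 - m * F2)

for label, freq in sorted(ims.items(), key=lambda kv: kv[1]):
    print(f"{label:>10s} -> {freq:5.2f} Hz")
```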
Summary
In the frequency spectrum, we observe prominent fundamentals, harmonics, and intermodulation components (IMs: nf_1 ± mf_2), which are designed to measure temporal integration processing during dynamic face perception. We investigate temporal integration, which we define as the integration of sequentially displayed face information, e.g., temporally separated successive frames of a displayed face video, into a unified representation of the spatiotemporal face input. This unified representation can be considered the basis for higher-level processing that generates further meaningful representations such as lip reading or facial expression recognition. It is worth noting that IM components have previously been established as an objective neural signature of integration processes in various perceptual phenomena occurring throughout the visual processing hierarchy [8,9,29,30,31,32,33,34,35]. In this context, we analyze the IM components to exclusively study temporal integration in dynamic face perception, which appears not to have been explored as in depth as spatial processing in the literature.
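For readers who want to see what such a spectrum analysis amounts to in practice, the following Python sketch takes the FFT of a simulated EEG epoch and reads out the amplitude at the fundamentals, one harmonic, and two IM frequencies. The sampling rate, epoch length, tag frequencies, and the simulated signal itself are all assumptions for illustration; the study's own preprocessing and statistics are not reproduced here.

```python
import numpy as np

FS = 500.0                      # sampling rate in Hz, assumed
F1, F2 = 7.5, 6.0               # tag frequencies, assumed
DURATION = 10.0                 # epoch length in seconds, assumed

t = np.arange(int(FS * DURATION)) / FS
# Toy EEG: responses at the tags, one harmonic, one IM term, plus noise.
eeg = (np.sin(2 * np.pi * F1 * t) + np.sin(2 * np.pi * F2 * t)
       + 0.5 * np.sin(2 * np.pi * 2 * F1 * t)
       + 0.3 * np.sin(2 * np.pi * (F1 + F2) * t)
       + 0.5 * np.random.randn(t.size))

# Amplitude spectrum via the real FFT.
spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, d=1.0 / FS)

def amp_at(f):
    """Amplitude at the FFT bin closest to frequency f."""
    return spectrum[np.argmin(np.abs(freqs - f))]

targets = {"f1": F1, "f2": F2, "2f1": 2 * F1,
           "f1+f2": F1 + F2, "f1-f2": abs(F1 - F2)}
for name, f in targets.items():
    print(f"{name:>6s} ({f:4.1f} Hz): {amp_at(f):.3f}")
```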