Head and body cues guide eye movements and facilitate target search in real-world videos

Static gaze cues presented in central vision produce shifts of observers' covert attention and eye movements, as well as benefits in perceptual performance when detecting simple targets. Less is known about how dynamic gazer behaviors involving head and body motion influence search eye movements and perceptual performance in real-world scenes. Participants searched for a target person (yes/no task, 50% target presence) while watching videos of one to three gazers looking at a designated person (50% valid gaze cues, i.e., looking at the target). To assess the contributions of different body parts, we digitally erased parts of the gazers in the videos to create three body-part/whole conditions: floating heads (only head movements), headless bodies (only lower body movements), and a baseline condition with the head and body intact. We show that valid dynamic gaze cues guided participants' eye movements (up to three fixations) closer to the target, shortened the time to foveate the target, reduced fixations to the gazers, and improved target detection. The effect of gaze cues in guiding eye movements to the search target was smallest when the gazer's head was removed from the videos. To assess the information about gaze-goal location inherent in each body-part/whole condition, we collected perceptual judgments of gaze goals from a separate group of observers given unlimited time. These observers' judgments showed larger estimation errors when the gazer's head was removed. This suggests that the reduced eye movement guidance from lower-body cueing is related to observers' difficulty extracting gaze information when the head is absent. Together, these results extend previous work by evaluating the impact of dynamic gazer behaviors on search in videos of real-world cluttered scenes.

Deep learning on independent spatial EEG activity patterns delineates time windows relevant for response inhibition.

Inhibitory control processes are an important aspect of executive functions and goal-directed behavior. However, the mostly correlative nature of neurophysiological studies has not provided insight into which aspects of neural dynamics best predict whether an individual is confronted with a situation requiring the inhibition of a response. This is particularly the case when considering the complex spatio-temporal nature of neural processes captured by EEG data. In the current study, we ask whether independent spatial activity profiles in the EEG data are useful for predicting whether an individual is confronted with a situation requiring response inhibition. We combine independent component analysis (ICA) with explainable artificial intelligence approaches (EEG-based deep learning) using data from a Go/Nogo task (N = 255 participants). Deep learning revealed four dissociable spatial activity profiles that are important for classifying Go and Nogo trials. Of note, for all four independent activity profiles, neural activity between 300 and 550 ms after stimulus presentation was most informative. Source localization analyses further revealed that regions in the precentral gyrus (BA6), the middle frontal gyrus (BA10), the inferior frontal gyrus (BA46), and the insular cortex (BA13) were associated with the isolated spatial activity profiles. The data suggest that concomitant processes are reflected in the identified time window. This has implications for the ongoing debate on the functional significance of event-related potential correlates of inhibitory control.
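
The abstract does not include the authors' code; the following is a minimal illustrative sketch of the kind of ICA-plus-classifier pipeline it describes, assuming synthetic epoched EEG data, four independent components, and a small 1D convolutional network built with scikit-learn's FastICA and PyTorch. None of these shapes, component counts, or model choices are taken from the paper.

```python
# Illustrative sketch (not the authors' pipeline): decompose multi-channel EEG
# epochs into independent spatial components with FastICA, then classify
# Go vs. Nogo trials from the component time courses with a small 1D CNN.
# All data here are synthetic; dimensions and labels are assumptions.
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 32, 256     # assumed epoch dimensions
X = rng.standard_normal((n_trials, n_channels, n_times)).astype(np.float32)
y = rng.integers(0, 2, n_trials)                 # 0 = Go, 1 = Nogo (synthetic labels)

# Fit ICA on concatenated time points (samples = time points, features = channels),
# yielding spatially independent activity profiles shared across trials.
ica = FastICA(n_components=4, random_state=0)
sources = ica.fit_transform(X.transpose(0, 2, 1).reshape(-1, n_channels))  # (trials*times, comps)
S = sources.reshape(n_trials, n_times, 4).transpose(0, 2, 1)               # (trials, comps, times)

# Small 1D CNN over the component time courses; one class prediction per trial.
model = nn.Sequential(
    nn.Conv1d(4, 16, kernel_size=7, padding=3), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(16, 2),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
xb = torch.from_numpy(S.astype(np.float32))
yb = torch.from_numpy(y.astype(np.int64))
for _ in range(5):                               # a few demo training steps on synthetic data
    opt.zero_grad()
    loss = loss_fn(model(xb), yb)
    loss.backward()
    opt.step()
```

In a sketch like this, identifying informative time windows (as the abstract reports for 300 to 550 ms) would require an additional attribution step, for example gradient-based saliency over the time axis of the component inputs; the specific explainability method used by the authors is not stated here.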
