Abstract
Previous research has shown that low-level visual features (i.e., low-level visual saliency) as well as socially relevant information predict gaze allocation under free-viewing conditions. However, these studies mainly used static and highly controlled stimulus material, revealing little about the robustness of attentional processes across diverse situations. Moreover, the influence of affective stimulus characteristics on visual exploration patterns remains poorly understood. Participants in the present study freely viewed a set of naturalistic, contextually rich video clips from a variety of settings, capable of eliciting different moods. Using recordings of eye movements, we quantified the degree to which social information, emotional valence, and low-level visual features influenced gaze allocation, using generalized linear mixed models. We found substantial and similarly large regression weights for low-level saliency and social information, affirming the importance of both predictor classes under ecologically more valid, dynamic stimulation conditions. Differences in predictor strength between individuals were large and highly stable across videos. Additionally, low-level saliency was less important for fixation selection in videos containing persons than in videos without persons, and less important in videos perceived as negative. We discuss the generalizability of these findings and the feasibility of applying this research paradigm to patient groups.
Highlights
Like most vertebrates, humans can only perceive a small part of their visual field at high acuity and must repeatedly move their eyes to construct a sufficiently high-resolution representation of their environment[1].
As motion is ubiquitous in virtually all everyday situations and has been shown to be the strongest single predictor of gaze allocation[17,29], video stimuli seem advantageous for investigating social attention compared to static stimuli.
To confirm the expected modulation of autonomic responses by differences in perceived valence and the presence of persons, we first examined the influence of valence and the presence of persons on arousal ratings and autonomic measures using 2 × 3 repeated-measures ANOVAs with video category and emotional valence ratings as within-subject factors.
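A 2 × 3 repeated-measures ANOVA of this kind can be sketched as follows on synthetic data; the factor levels (`person`/`no_person`, three valence levels), the subject count, and the simulated arousal ratings are illustrative assumptions, not the study's data.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)

# Hypothetical balanced design: each subject contributes one arousal
# rating per cell of the 2 (category) x 3 (valence) within-subject design.
rows = []
for subj in range(12):
    for category in ("person", "no_person"):
        for valence in ("negative", "neutral", "positive"):
            rows.append({"subject": subj,
                         "category": category,
                         "valence": valence,
                         "arousal": rng.normal(loc=5.0)})
df = pd.DataFrame(rows)

res = AnovaRM(df, depvar="arousal", subject="subject",
              within=["category", "valence"]).fit()
print(res.anova_table)  # F tests for both main effects and the interaction
```

`AnovaRM` requires exactly one observation per subject and cell (or an `aggregate_func`), which matches the balanced within-subject design described above.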
Summary
Humans can only perceive a part of their visual field at high acuity and repeatedly move their eyes to construct a sufficiently high-resolution representation of their environment[1]. Socially relevant features such as human heads and eyes[4,5], the gaze direction of depicted people[6], people who are talking[7], and people of high social status[8] have been shown to attract attention during free viewing of images or dynamic scenes. Non-social cues such as text[9,10] and the center of the screen[11,12,13] can also serve as predictors of gaze behavior. Another line of research has focused on the predictive value of low-level image features such as contrast, color, edge density and, for dynamic scenes, motion. Participants show more consistent eye-movement patterns when viewing videos than when viewing static images[30,31], indicating a potentially higher predictive value of basic stimulus properties for visual exploration.
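Low-level feature maps of the kind mentioned above can be approximated very simply. The sketch below computes a block-wise luminance-contrast map (local standard deviation) and a motion-energy map (absolute frame difference) with plain NumPy; the block size and the use of local standard deviation as a contrast proxy are illustrative choices, not the saliency model used in the study.

```python
import numpy as np

def local_contrast(frame, k=8):
    """Local standard deviation over non-overlapping k x k blocks,
    a simple proxy for a luminance-contrast feature map."""
    h, w = frame.shape
    blocks = frame[: h // k * k, : w // k * k].reshape(h // k, k, w // k, k)
    return blocks.std(axis=(1, 3))

def motion_energy(prev, curr):
    """Absolute pixelwise frame difference as a crude motion map."""
    return np.abs(curr.astype(float) - prev.astype(float))

rng = np.random.default_rng(2)
frame = rng.random((64, 64))          # one synthetic grayscale frame
contrast_map = local_contrast(frame)  # shape (8, 8)
motion_map = motion_energy(np.zeros((64, 64)), frame)
```

Per-pixel maps like these, together with social-feature annotations, are the kind of predictors that can then be entered into a fixation model.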