Abstract

Both low-level physical saliency and social information, as conveyed by human heads or bodies, are known to drive gaze behavior in free-viewing tasks. Previous research has used a wide variety of face stimuli, ranging from photographs of real humans to schematic faces, frequently without systematically differentiating between these stimulus types. In the current study, we used a Generalized Linear Mixed Model (GLMM) approach to investigate to what extent schematic artificial faces predict gaze when presented alone or in competition with real human faces. While the GLMMs indicated substantial effects of both real and artificial faces in all conditions, relative differences in predictive power emerged: artificial faces were less predictive than real human faces but still contributed significantly to gaze allocation. These results further our understanding of how social information guides gaze in complex naturalistic scenes.
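For readers unfamiliar with this class of models, the following is a minimal sketch of how such a binomial GLMM could be specified in Python with statsmodels. All column names (fixated, real_face, artificial_face, saliency, participant, video) are hypothetical stand-ins for the study's actual variables; this illustrates the general modeling approach, not the authors' analysis code.

```python
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Hypothetical data: one row per candidate gaze location and video frame,
# with a binary "fixated" outcome and standardized predictor columns.
df = pd.read_csv("fixation_data.csv")

# Binomial GLMM: fixation selection predicted by real-face presence,
# artificial-face presence, and low-level saliency, with variance
# components (random intercepts) for participants and videos.
model = BinomialBayesMixedGLM.from_formula(
    "fixated ~ real_face + artificial_face + saliency",
    {"participant": "0 + C(participant)", "video": "0 + C(video)"},
    df,
)
result = model.fit_vb()  # approximate fit via variational Bayes
print(result.summary())
```

Random intercepts for participants and videos account for the repeated-measures structure: fixation probabilities from the same observer or the same clip are not independent, and ignoring this would overstate the precision of the fixed effects.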

Highlights

  • When exploring our surroundings, we preferentially allocate attention to other human beings

  • We examined whether this attentional bias persists across different face types, i.e., whether the presence of real human and artificial faces differentially impacts gaze allocation when viewing videos of complex, naturalistic scenes

  • A direct comparison between real human and artificial faces, in the video subset including both face types, showed a stronger influence of real human faces (β = 0.289, 95% confidence interval (CI) [0.285, 0.292]) than of artificial faces (β = 0.156, 95% CI [0.153, 0.159]) on fixation selection, while both predictors contributed significantly to gaze allocation (see the worked example after this list)
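If the GLMM used a logistic (logit) link, as is standard for binary fixation data (the excerpt above does not state the link function), the reported fixed-effect estimates are log-odds and can be converted to odds ratios; the snippet below works through that arithmetic for the two β values reported above.

```python
import math

# Under a logit link (an assumption; see lead-in), exp(beta) is the
# multiplicative change in fixation odds associated with each predictor.
for label, beta in [("real human faces", 0.289), ("artificial faces", 0.156)]:
    print(f"{label}: odds ratio ≈ {math.exp(beta):.2f}")

# Expected output:
# real human faces: odds ratio ≈ 1.34
# artificial faces: odds ratio ≈ 1.17
```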

Introduction

We preferentially allocate attention to other human beings. Various eye-tracking studies have shown that our strong tendency to fixate others is apparent both when viewing images or videos in laboratory settings (Itier et al., 2007; Birmingham and Kingstone, 2009; Cerf et al., 2009; Kingstone, 2009; Bindemann et al., 2010; Coutrot and Guyader, 2014; Xu et al., 2014; Nasiopoulos et al., 2015; End and Gamer, 2017; Flechsenhar and Gamer, 2017; Rösler et al., 2017) and, to a somewhat lesser extent, in real-life social interactions (Foulsham et al., 2011; Laidlaw et al., 2011; Freeth et al., 2013). Beyond real humans, however, our surroundings also contain artificial depictions of faces, such as cartoon figures or statues. How does the processing of these artificial faces differ from the processing of real faces? The facial expressions and gestures of cartoon figures or statues convey information about their alleged emotions or internal states and have even been found to yield higher accuracy in emotion detection than real faces (Kendall et al., 2016).
