Abstract

Predictive language processing is often studied by measuring eye movements as participants look at objects on a computer screen while they listen to spoken sentences. This variant of the visual-world paradigm has revealed that information encountered by a listener at a spoken verb can give rise to anticipatory eye movements to a target object, which is taken to indicate that people predict upcoming words. The ecological validity of such findings remains questionable, however, because these computer experiments used two-dimensional stimuli that were mere abstractions of real-world objects. Here we present a visual-world paradigm study in a three-dimensional (3-D) immersive virtual reality environment. Despite significant changes in the stimulus materials and the different mode of stimulus presentation, language-mediated anticipatory eye movements were still observed. These findings thus indicate that people do predict upcoming words during language comprehension in a more naturalistic setting where natural depth cues are preserved. Moreover, the results confirm the feasibility of using eyetracking in rich and multimodal 3-D virtual environments.

Highlights

  • Predictive language processing is often studied by measuring eye movements as participants look at objects on a computer screen while they listen to spoken sentences

  • Participants' eye movements were recorded, and it was found that, at remarkably short latencies, eye gaze was directed to those objects that were mentioned in the spoken sentences or that were associated with the content of the narrative

  • Our successful conceptual replication of the original study indicates that the previous findings do generalize to richer situations of stereoscopic 3-D vision

Introduction

Predictive language processing is often studied by measuring eye movements as participants look at objects on a computer screen while they listen to spoken sentences. In early work with this method, participants listened to short stories while looking at a visual display that depicted several objects. Their eye movements were recorded, and it was found that, at remarkably short latencies, eye gaze was directed to those objects that were mentioned in the spoken sentences or that were associated with the content of the narrative. These findings led to the conclusion that eyetracking is a useful tool for real-time investigation of perceptual and cognitive processes and, in particular, for the detailed study of speech perception, memory, and language. In such designs, the spoken verb could relate to all presented objects (e.g., "The boy will move the cake," paired with a scene in which all the depicted objects are moveable).
