Abstract

When participants are presented simultaneously with spoken language and a visual display depicting objects to which that language refers, participants spontaneously fixate the visual referents of the words being heard [Cooper, R. M. (1974). The control of eye fixation by the meaning of spoken language: A new methodology for the real-time investigation of speech perception, memory, and language processing. Cognitive Psychology, 6(1), 84–107; Tanenhaus, M. K., Spivey-Knowlton, M. J., Eberhard, K. M., & Sedivy, J. C. (1995). Integration of visual and linguistic information in spoken language comprehension. Science, 268(5217), 1632–1634]. We demonstrate here that such spontaneous fixation can be driven by partial semantic overlap between a word and a visual object. Participants heard the word ‘piano’ when (a) a piano was depicted amongst unrelated distractors; (b) a trumpet was depicted amongst those same distractors; and (c), both the piano and trumpet were depicted. The probability of fixating the piano and the trumpet in the first two conditions rose as the word ‘piano’ unfolded. In the final condition, only fixations to the piano rose, although the trumpet was fixated more than the distractors. We conclude that eye movements are driven by the degree of match, along various dimensions that go beyond simple visual form, between a word and the mental representations of objects in the concurrent visual field.

Highlights

  • Cooper (1974) first demonstrated that eye movements are directed towards the objects to which individual words refer in an accompanying visual display, even as those words unfold in time; participants were more likely to fixate a picture of a dog when hearing part or all of the word ‘dog’ than to fixate unrelated pictures

  • Whilst this observation has received considerable attention in recent years (e.g. Allopenna, Magnuson, & Tanenhaus, 1998; Dahan, Magnuson, & Tanenhaus, 2001), there is a second observation which has not: Cooper observed that participants were more likely to fixate a picture showing a sailboat when hearing the semantically related word ‘lake’, and that 53% of these looks were initiated during the word itself (57% of looks to the dog were initiated during ‘dog’)

  • Using a visual search paradigm, Moores, Laiti, and Chelazzi (2003; Experiment 5) found more looks towards a lock than towards other objects when participants were given the word ‘key’ as the search target. These observations suggest a visual equivalent of semantic priming (Meyer & Schvaneveldt, 1971)


Introduction

Cooper (1974) first demonstrated that eye movements are directed towards the objects to which individual words refer in an accompanying visual display, even as those words unfold in time; participants were more likely to fixate a picture of a dog when hearing part or all of the word ‘dog’ than to fixate unrelated pictures. Whilst this observation has received considerable attention in recent years (e.g. Allopenna, Magnuson, & Tanenhaus, 1998; Dahan, Magnuson, & Tanenhaus, 2001), there is a second observation which has not: Cooper observed that participants were more likely to fixate a picture showing a sailboat when hearing the semantically related word ‘lake’, and that 53% of these looks were initiated during the word itself (57% of looks to the dog were initiated during ‘dog’). Using a visual search paradigm, Moores, Laiti, and Chelazzi (2003; Experiment 5) found more looks towards a lock than towards other objects when participants were given the (visual) word ‘key’ as the search target. These observations suggest a visual equivalent of semantic priming (Meyer & Schvaneveldt, 1971). Will we observe increased looks towards an object (e.g. a trumpet) that is related only by category to the target word (‘piano’)?

