Abstract

The influence of formal literacy on spoken language-mediated visual orienting was investigated using a simple look-and-listen task which resembles everyday behavior. In Experiment 1, high and low literates listened to spoken sentences containing a target word (e.g., “magar,” crocodile) while at the same time looking at a visual display of four objects (a phonological competitor of the target word, e.g., “matar,” peas; a semantic competitor, e.g., “kachuwa,” turtle; and two unrelated distractors). In Experiment 2, the semantic competitor was replaced with another unrelated distractor. Both groups of participants shifted their eye gaze to the semantic competitors (Experiment 1). In both experiments high literates shifted their eye gaze toward phonological competitors as soon as phonological information became available and moved their eyes away as soon as the acoustic information mismatched. Low literates, by contrast, used phonological information only when semantic matches between spoken word and visual referent were absent (Experiment 2), but unlike those of high literates, these phonologically mediated shifts in eye gaze were not closely time-locked to the speech input. These data provide further evidence that in high literates language-mediated shifts in overt attention are co-determined by the type of information in the visual environment, the timing of cascaded processing in the word- and object-recognition systems, and the temporal unfolding of the spoken language. Our findings indicate that low literates exhibit similar cognitive behavior but, instead of participating in a tug-of-war among multiple types of cognitive representations, achieve word–object mapping primarily at the semantic level. If forced, for instance by a situation in which semantic matches are not present (Experiment 2), low literates may on occasion have to rely on phonological information, but they do so in a much less proficient manner than their highly literate counterparts.

Highlights

  • In many situations we are faced with information arriving simultaneously through visual and speech channels

  • The data of Experiment 1 suggest that low literates, unlike highly literate participants, do not use phonological information when matching spoken words with concurrent visual objects

  • We conducted another experiment to investigate whether low literates can ever use phonological information to guide visual orienting

Introduction

In many situations we are faced with information arriving simultaneously through visual and speech channels. In a seminal study of this behavior, Cooper’s participants listened to short narratives while their eye movements were monitored on an array of spatially distinct line drawings of common objects, some of which were referred to in the spoken sentences. He observed that his participants, for instance when listening to a story about a safari in Africa, very quickly shifted their eye gaze to objects which were referred to, often already during the acoustic duration of the respective word (e.g., halfway through the acoustic unfolding of the word “lion,” participants started to shift their eye gaze to the drawing of the lion).
