Abstract

In addition to “nonverbal search” for objects, modern life also necessitates “verbal search” for written words in variable configurations. We know less about how we locate words in novel spatial arrangements, as occurs on websites and menus, than when words are located in passages. In this study we leveraged eye tracking technology to examine the hypothesis that objects are screened simultaneously in parallel, while words can only be found when each is directly foveated in serial fashion. Participants were provided with a cue (e.g., rabbit) and tasked with finding a thematically related target (e.g., carrot) embedded within an array including a dozen distractors. The cues and arrays consisted of object pictures on nonverbal trials and of written words on verbal trials. In keeping with the well-established “picture superiority effect,” picture targets were identified more rapidly than word targets. Eye movement analysis showed that picture superiority was promoted by parallel viewing of objects, while words were viewed serially. Different factors influenced performance in each stimulus modality: lexical characteristics such as word frequency modulated viewing times during verbal search, while taxonomic category affected viewing times during nonverbal search. In addition to within-platform task conditions, performance was examined in cross-platform conditions where picture cues were followed by word arrays, and vice versa. Although taxonomically related words did not capture gaze on verbal trials, they were viewed disproportionately when preceded by cross-platform picture cues. Our findings suggest that verbal and nonverbal search are associated with qualitatively different search strategies and forms of distraction, and that cross-platform search incorporates characteristics of both.

Highlights

  • Humans regularly engage in visual search to navigate our cluttered world

  • Saccades were classified by their start and end points into three types: within-item saccades (starting and ending in the same area of interest, AOI), local saccades (ending in an adjacent AOI), and long-range saccades (ending in a non-adjacent AOI)

  • Eye movements revealed some of the mechanisms underlying picture superiority in the context of visual search
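The three-way saccade taxonomy above can be expressed as a simple classification rule. The sketch below is a hypothetical illustration only: the grid layout of AOIs and the adjacency criterion (neighboring grid cells, including diagonals) are assumptions for demonstration, not details taken from the paper's methods.

```python
# Hypothetical sketch of the saccade classification described in the highlights.
# AOI labels and their (row, col) grid positions are illustrative assumptions.

def classify_saccade(start_aoi, end_aoi, positions):
    """Classify a saccade as 'within-item', 'local', or 'long-range'.

    positions maps each AOI label to an assumed (row, col) grid coordinate.
    """
    if start_aoi == end_aoi:
        # Start and end fall in the same AOI.
        return "within-item"
    r1, c1 = positions[start_aoi]
    r2, c2 = positions[end_aoi]
    # Assumed adjacency rule: neighboring grid cells, diagonals included.
    if abs(r1 - r2) <= 1 and abs(c1 - c2) <= 1:
        return "local"
    return "long-range"


# Example with a hypothetical 3x3 array of AOIs:
layout = {"A": (0, 0), "B": (0, 1), "C": (2, 2)}
print(classify_saccade("A", "A", layout))  # within-item
print(classify_saccade("A", "B", layout))  # local
print(classify_saccade("A", "C", layout))  # long-range
```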

Introduction

Humans regularly engage in visual search to navigate our cluttered world. We engage in “nonverbal search” for objects throughout the day, for example by picking out clothes in the morning, finding a parking space at work, or gathering ingredients to prepare dinner. A typical day may also involve “verbal search” for written words: finding stories to read in a newspaper or on a website, emails within our inbox, songs within a playlist, or items on a restaurant menu. This type of verbal search differs from the stereotyped environment of traditional text reading, in which the eyes scan from left to right along each line of text, from the top to the bottom of a page, as sentences and paragraphs are processed in sequential order (Rayner, 1998; Rayner, 2009). Far less is known about how we locate word targets in these novel spatial arrangements.

