Abstract

Any object-oriented action requires that the object first be brought into the attentional foreground, often through visual search. Outside the laboratory, this would always take place in the presence of a scene representation acquired from ongoing visual exploration. The interaction of scene memory with visual search is still not completely understood. Feature integration theory (FIT) has shaped both research on visual search, emphasizing the scaling of search times with set size when searches entail feature conjunctions, and research on visual working memory through the change detection paradigm. Despite its neural motivation, there is no consistently neural process account of FIT in both of these dimensions. We propose such an account that integrates (1) visual exploration and the building of scene memory, (2) the attentional detection of visual transients and the extraction of search cues, and (3) visual search itself. The model uses dynamic field theory, in which networks of neural dynamic populations supporting stable activation states are coupled to generate sequences of processing steps. The neural architecture accounts for basic findings in visual search and proposes a concrete mechanism for the integration of working memory into the search process. In a behavioral experiment, we address the long-standing question of whether both the overall speed and the efficiency of visual search can be improved by scene memory. We find both effects and provide model fits of the behavioral results. In a second experiment, we show that the increase in efficiency is fragile and trace that fragility to the resetting of spatial working memory.
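
To make the mechanism concrete for readers unfamiliar with dynamic field theory, the sketch below simulates the kind of dynamics the architecture is built from: a one-dimensional Amari field in which local excitation and surround inhibition let localized input induce self-stabilized activation peaks, the stable states referred to above. All parameter values, the grid size, and the input placement are illustrative assumptions, not the settings of the paper's model.

```python
import numpy as np

# A minimal sketch of the one-dimensional Amari field that underlies dynamic
# field theory (DFT). Parameter values, grid size, and input placement are
# illustrative assumptions, not settings of the paper's architecture.
#   tau * du(x)/dt = -u(x) + h + s(x) + sum_x' w(x - x') * f(u(x'))

def gauss(d, sigma):
    """Gaussian in distance d, normalized to (approximately) unit integral."""
    return np.exp(-d**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

n = 181                                          # field sites (e.g., feature values)
x = np.arange(n, dtype=float)
d = x[:, None] - x[None, :]                      # pairwise site distances
w = 12.0 * gauss(d, 5.0) - 5.0 * gauss(d, 15.0)  # local excitation, broader inhibition

tau, h, dt = 20.0, -5.0, 1.0                     # time scale, resting level, Euler step
u = np.full(n, h)                                # field starts at its resting level
s = 6.0 * np.exp(-(x - 60.0)**2 / 32.0) \
    + 6.5 * np.exp(-(x - 120.0)**2 / 32.0)       # two localized inputs

def f(u, beta=4.0):
    """Sigmoidal output nonlinearity."""
    return 1.0 / (1.0 + np.exp(-beta * u))

for _ in range(2000):                            # Euler integration to an attractor
    u += (dt / tau) * (-u + h + s + w @ f(u))

peak = int(np.argmax(u))
print(f"strongest self-stabilized peak at site {peak}, u = {u[peak]:.2f}")
```

Self-stabilized peaks of this kind serve as the units of representation; coupling many such fields is what generates the sequences of processing steps described above.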

Highlights

  • Bringing an object into the attentional foreground is the first step of most intentional actions that are directed at the outer world (Tatler & Land, 2016)

  • We provide a neural process account that integrates the three core components of visual orientation to objects in the environment: (1) visual exploration that builds a scene working memory; (2) visual attention directed to the locations of visual transients and extraction of the visual features at those locations; and (3) visual search for matching objects

  • We present a first neural process account of feature integration theory that avoids any element of information processing while modeling a complete visual search paradigm, including the detection of the search cue from visual transients, its commitment to feature memory, the autonomous generation of a sequence of attentional selection decisions, and the matching of the cued feature values against the feature values extracted at each attended location (a schematic sketch of this sequence follows the list)
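
As a rough, deliberately non-neural illustration of the sequence listed in the last highlight, the sketch below caricatures the search process as explicit steps: hold the cue in feature memory, attend one location at a time, extract its features, match them against the cue, and inhibit rejected locations. All names and data structures are hypothetical, and the paper's model realizes these steps through coupled dynamic fields rather than an explicit program loop.

```python
# Schematic caricature of the modeled search sequence; illustrative only.
def serial_search(scene, cue):
    """scene: dict mapping location -> feature dict; cue: feature dict."""
    inhibited = set()                    # spatial inhibition of visited locations
    while len(inhibited) < len(scene):
        # attentional selection: pick some uninhibited location
        location = next(loc for loc in scene if loc not in inhibited)
        features = scene[location]       # feature extraction at the attended location
        if all(features.get(dim) == val for dim, val in cue.items()):
            return location              # all cued feature values match: target found
        inhibited.add(location)          # mismatch: inhibit location, continue search
    return None                          # no item matched: target absent

items = {(1, 3): {"color": "red", "shape": "bar"},
         (4, 2): {"color": "green", "shape": "disk"},
         (7, 5): {"color": "red", "shape": "disk"}}
print(serial_search(items, {"color": "red", "shape": "disk"}))  # -> (7, 5)
```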

Introduction

Bringing an object into the attentional foreground is the first step of most intentional actions that are directed at the outer world (Tatler & Land, 2016). Since Anne Treisman’s seminal work on feature integration theory (Treisman & Gelade, 1980), the question of how visual search is guided by individual feature dimensions or by combinations of them has been a dominant theme of that research (Wolfe & Horowitz, 2017). How the time needed to find a cued object scales with the number of distractor items, or with the metric differences between targets and distractors, has been studied intensively, and the findings have been used to diagnose the underlying process organization (Duncan & Humphreys, 1989; Friedman-Hill & Wolfe, 1995; Wolfe, 1998, 2014).
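
The scaling at the heart of this diagnostic is conventionally summarized by a linear regression of response time (RT) on set size. As a brief aside, stated as textbook convention rather than as a result of this paper: writing $N$ for the number of displayed items, one fits

$$\mathrm{RT}(N) = \mathrm{RT}_0 + s \cdot N,$$

where the intercept $\mathrm{RT}_0$ absorbs set-size-independent processing and the slope $s$ indexes search efficiency. Slopes near $0$ ms/item signal efficient ("pop-out") feature search, while slopes of roughly $20$–$30$ ms/item on target-present trials are typical of inefficient conjunction search, with target-absent slopes roughly twice as steep.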
