Abstract

One of the puzzles of visual search is that discriminability of a single target from a single distractor poorly predicts search performance. Last year (Rosenholtz, Chan, & Balas, VSS 2009) we suggested that in crowded visual search displays, the key determinant of search performance is instead peripheral discriminability between a patch containing both target and distractors and a patch containing multiple distractors. Using a model of peripheral vision in which the visual system represents the visual input by summary statistics over each local pooling region (Balas, Nakano, & Rosenholtz, 2009), we predicted peripheral discriminability (d′) of crowded target-present and distractor-only patches, and showed that this in turn predicted the relative difficulty of a number of standard search tasks. Here, our goal is to make quantitative predictions of visual search performance using this framework. Specifically, we model both reaction time vs. set size (RT/setsize) slopes and number of fixations to find the target. To this end, we have derived the ideal saccadic targeter for the case in which the input consists of independent noisy “targetness” measurements from multiple, overlapping pooling regions. The radius of each pooling region is roughly half its eccentricity, in accordance with Bouma's Law. For crowded pooling regions, our predicted d′ allows us to compute the likelihood of observing a given amount of “targetness,” conditioned on whether or not the given pooling region contains a target. For uncrowded pooling regions, e.g., near fixation, discriminability is maximal. An additional parameter controls the amount of memory from previous fixations. The model performs well at predicting RT/setsize slopes, and reasonably well at predicting the mean number of fixations to find the target. Best predictions come when the model has minimal memory. This suggests that search performance is indeed constrained by the extent to which peripheral vision can discriminate between target-present and distractor-only patches.
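
The following is a minimal sketch of how such a targeter could be simulated, not the authors' derivation: it assumes a hypothetical grid of items, unit-variance Gaussian "targetness" noise, illustrative d′ values for crowded vs. uncrowded pooling regions, a greedy maximum-a-posteriori fixation rule in place of the full ideal-targeter computation, and a memory parameter implemented as decay on the accumulated log posterior.

```python
# Sketch of a saccadic targeter driven by noisy "targetness" measurements from
# overlapping, eccentricity-scaled pooling regions. All parameter values
# (d-primes, display geometry, memory weight) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical display: items on a grid, one of them is the target.
item_xy = np.array([(x, y) for x in range(-3, 4) for y in range(-3, 4)], float)
target_idx = rng.integers(len(item_xy))

D_CROWDED = 1.0    # assumed peripheral d' for a crowded pooling region
D_UNCROWDED = 4.0  # assumed d' for an uncrowded region (e.g., near fixation)
MEMORY = 0.1       # 0 = no memory across fixations, 1 = perfect accumulation

def pooling_regions(fix):
    """One pooling region per item, radius ~ half its eccentricity (Bouma's Law)."""
    ecc = np.linalg.norm(item_xy - fix, axis=1)
    radius = np.maximum(0.5 * ecc, 0.5)          # floor so foveal regions are non-empty
    dist = np.linalg.norm(item_xy[None] - item_xy[:, None], axis=2)
    contains = dist <= radius[:, None]           # contains[i, j]: item j falls in region i
    crowded = contains.sum(axis=1) > 1           # region holds more than one item
    dprime = np.where(crowded, D_CROWDED, D_UNCROWDED)
    return contains, dprime

def measure(fix):
    """Noisy targetness per region: N(d', 1) if it contains the target, N(0, 1) otherwise."""
    contains, dprime = pooling_regions(fix)
    signal = np.where(contains[:, target_idx], dprime, 0.0)
    return contains, dprime, signal + rng.standard_normal(len(item_xy))

def log_likelihood(contains, dprime, x):
    """Log-likelihood of the measurements under each hypothesized target location."""
    # Hypothesis "target at j": region i has mean d'_i if it contains item j, else 0.
    mu = np.where(contains, dprime[:, None], 0.0)          # regions x hypotheses
    return (-0.5 * (x[:, None] - mu) ** 2).sum(axis=0)     # unit-variance Gaussian noise

# Greedy (MAP) targeter: fixate the currently most probable target location.
log_post = np.zeros(len(item_xy))                          # flat prior over locations
fix = np.zeros(2)
for n_fixations in range(1, 50):
    contains, dprime, x = measure(fix)
    log_post = MEMORY * log_post + log_likelihood(contains, dprime, x)
    best = int(np.argmax(log_post))
    if best == target_idx and np.linalg.norm(fix - item_xy[best]) < 1e-6:
        break                                              # target fixated and identified
    fix = item_xy[best]

print(f"found target after {n_fixations} fixations")
```

Setting MEMORY to 0 gives a memoryless observer that relies only on the current fixation's measurements, corresponding to the minimal-memory case that the abstract reports as giving the best predictions.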
