Abstract
In his position paper, Theeuwes (2010) challenges a number of current theories of visual selective attention, which assume that what we select in the first instance is not simply driven bottom-up by properties of the stimulus, but is also influenced (at least to some extent) by internal system settings that are under top-down control. In essence, Theeuwes (2010) puts forward a strongly stimulus-driven view of visual selection, maintaining that the first sweep of information through the visual system is driven entirely by bottom-up stimulus salience, and that top-down settings can bias visual processing only after selection of the most salient item, based on recurrent, feedback processing. This view represents one pole of how visual selection can be conceived. The other pole is a strong version of the contingent-capture hypothesis (e.g., Folk, Remington, & Johnston, 1992), which assumes that unless a bottom-up computed signal matches the top-down (goal) settings of the system, it will not ‘capture attention’; in other words, only signals that match these settings will engage selective attention, so that what is selected is entirely under top-down control. The dimension-weighting account (DWA; e.g., Found & Müller, 1996; Müller, Heller, & Ziegler, 1995) that we have developed over the past 15 years or so takes a position in between these extremes: consistent with Theeuwes (2010) and computational theories (e.g., Itti & Koch, 2000; Koch & Ullman, 1985; Wolfe, 1994), the DWA assumes that attentional selection is driven by an ‘overall-saliency’ or ‘master’ map of the visual array; that is, humans attend with priority to the stimulus (location) that achieves the highest activation on this map. However, we argue that this map is not computed in a purely bottom-up, stimulus-driven manner; rather, saliency computations may be biased, in a spatially parallel fashion, by top-down signals reflecting expectations of particular stimulus attributes. We refer to this account as the dimension-weighting account (see Fig. 1 for an illustration of the processing architecture).
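To make the weighting architecture concrete, the sketch below is a minimal illustration (ours, not the authors' implementation): bottom-up feature-contrast maps for separate dimensions (e.g., color, orientation) are multiplied by top-down dimension weights and summed, in a spatially parallel manner, into a master saliency map, whose maximum determines the attended location. All function and variable names are hypothetical.

```python
import numpy as np

def master_saliency(contrast_maps, dimension_weights):
    """Sum dimension-specific feature-contrast maps into a master saliency
    map, with each dimension scaled by a top-down weight.

    contrast_maps: dict mapping a dimension name (e.g. 'color') to a 2-D
        array of bottom-up feature-contrast values per location.
    dimension_weights: dict mapping a dimension name to a scalar weight
        reflecting top-down expectations about that dimension.
    """
    weighted = [dimension_weights[d] * m for d, m in contrast_maps.items()]
    return np.sum(weighted, axis=0)

def select_location(saliency_map):
    """Attention is allocated to the location with the highest activation
    on the master map."""
    return np.unravel_index(np.argmax(saliency_map), saliency_map.shape)

# Example display: a color singleton (target) and an orientation singleton
# (distractor) embedded in low-contrast noise.
rng = np.random.default_rng(0)
color = rng.random((10, 10)) * 0.1
orientation = rng.random((10, 10)) * 0.1
color[2, 3] = 1.0          # strong color contrast at the target location
orientation[7, 7] = 0.8    # competing orientation contrast (distractor)

# Up-weighting the expected (color) dimension biases the saliency
# computation toward the color singleton before any item is selected.
weights = {"color": 1.5, "orientation": 0.5}
smap = master_saliency({"color": color, "orientation": orientation}, weights)
print(select_location(smap))   # -> (2, 3), the color-singleton location
```

With the weights reversed (e.g., up-weighting orientation), the same bottom-up input would yield the distractor location, which is the sense in which selection is neither purely stimulus-driven nor purely goal-driven under this account.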