According to Guided Search (and similar models), features are only conjoined once an object is attended. This assertion is supported by many experiments: e.g., conjunctions of features do not pop out in visual search, and observers are poor at judging the proportions of different types of conjunctions in displays. Thus, observers appear to be insensitive to preattentive conjunctions of features. Now, consider two versions of a triple conjunction search for red, vertical, rectangular targets among distractors that could be red, green, or blue; vertical, horizontal, or oblique; and rectangular, oval, or jagged. In one condition, all 26 possible distractor types are present on each trial (set sizes: 27 and 54). In the other condition, only three distractor types are present (e.g., red oblique ovals, jagged green verticals, and blue horizontal rectangles). Critically, in each condition, each feature is evenly distributed in the display: i.e., 1/3 of items are red, 1/3 green, and 1/3 blue, and similarly for orientation and shape. Since the preattentive feature maps are identical in the two conditions, search performance should not differ. However, RTs are faster in the condition with only three distractor types (grand means: 625 msec vs. 835 msec). How can we explain this? Perhaps the easier search was done by selecting one feature (e.g., the red items) and looking for an oddball within that subset. However, in a control experiment, when the target was defined as the oddball in an otherwise homogeneous red subset, search was ~200 msec slower than in the three-distractor condition. Alternatively, it may be possible to reject groups of identical items even when the group is defined conjunctively. Regardless of the explanation, these data show that preattentive conjunctions of basic features speed search even though explicit appreciation of conjunctions requires attention.
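To make the two display constructions concrete, here is a minimal sketch of the design as described above. It is illustrative only, not the authors' stimulus code: the function names and the cycling scheme for assigning distractor types are assumptions. It builds one 27-item display per condition and prints the marginal feature proportions, which come out at (approximately) 1/3 per feature value in both, even though the conjunctive structure of the distractor sets differs sharply.

```python
# Illustrative sketch of the two display conditions (hypothetical code,
# not the authors' stimulus generator).
from collections import Counter
from itertools import product

COLORS = ["red", "green", "blue"]
ORIENTS = ["vertical", "horizontal", "oblique"]
SHAPES = ["rectangle", "oval", "jagged"]
TARGET = ("red", "vertical", "rectangle")

def heterogeneous_display(set_size=27):
    """One target plus distractors drawn from all 26 non-target conjunctions."""
    types = [t for t in product(COLORS, ORIENTS, SHAPES) if t != TARGET]
    items = [TARGET]
    for i in range(set_size - 1):
        items.append(types[i % len(types)])  # cycle so no type dominates
    return items

def three_type_display(set_size=27):
    """One target plus distractors of only three conjunctive types,
    each sharing exactly one feature with the target."""
    types = [("red", "oblique", "oval"),
             ("green", "vertical", "jagged"),
             ("blue", "horizontal", "rectangle")]
    items = [TARGET]
    for i in range(set_size - 1):
        items.append(types[i % len(types)])  # near-equal counts per type
    return items

def proportions(items, dim):
    """Fraction of items carrying each value on one feature dimension."""
    counts = Counter(item[dim] for item in items)
    return {v: round(c / len(items), 2) for v, c in counts.items()}

for name, display in [("26-type", heterogeneous_display()),
                      ("3-type", three_type_display())]:
    for dim, label in enumerate(["color", "orientation", "shape"]):
        # Both conditions print ~0.33 per value: identical preattentive
        # feature maps, despite very different conjunction structure.
        print(name, label, proportions(display, dim))
```

Note that with a single target present, the three-type condition can only approximate exact thirds on every dimension at once; the sketch distributes distractor types as evenly as possible, which is presumably close to how the displays were balanced in practice.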