Abstract

While humans search for a goal-relevant object, an internal representation of the features necessary to identify the to-be-searched-for object (i.e., target) guides attention towards visual stimuli with matching properties. Recent evidence suggests that features that negatively define a target (i.e., negative features) also bias attentional allocation through top-down suppression. Since humans usually know what to look for, it will rarely, if ever, be the case that a negative feature defines a goal-relevant object alone. Thus, to better understand the relevance of top-down suppression, our participants searched for a target conjunctively defined by a positive (e.g., a blue bar) and a negative feature (e.g., a nonred bar), with both features realized within the same dimension (color in Experiments 1, 3, and 4; orientation in Experiment 2). Experiments 1 and 2 showed that reaction times were slower if cues with a negative feature preceded the target at the same versus a different position (i.e., validly vs. invalidly cued targets), indicating suppression. In contrast, cues with a task-irrelevant different-dimension feature elicited no significant reaction time difference between validly cued and invalidly cued trials. In addition, Experiment 3 showed that while negative cues were top-down suppressed, cues with a positive feature captured attention. This finding indicated that both positive and negative features guide visual attention through capture and suppression, respectively, during the search for a target defined by the presence of one and the absence of another feature from the same dimension. However, suppression seems to apply not only to the negative feature but to all nontarget features in the task-relevant dimension. This was shown in Experiment 4, in which participants suppressed cues with a task-irrelevant color similarly to cues with a negative color.
