Abstract
Voluntary attentional selection requires matching sensory input to a stored representation of the target features. We compared the precision of attentional selection with the precision of the underlying memory representation of the target. To measure the precision of attentional selection, we used a cue-target paradigm in which participants searched for a colored target. Typically, reaction times (RTs) are shorter at cued than at uncued locations when the cue has the same color as the target; in contrast, cueing effects are absent or even inverted when cue and target colors are dissimilar. By systematically varying the difference between cue and target color, we calculated a function relating cue color to cueing effects. The width of this function reflects the precision of attentional selection and was compared to the precision of judgments of the target color on a color wheel. The precision of the memory representation was far better than the precision of attentional selection. When the task was made more difficult by increasing the similarity between the target and the nontarget stimuli in the target display, the precision of attentional selection increased but remained worse than the precision of memory. With the more difficult search task, we also observed that for dissimilar cue colors, RTs were slower at cued than at uncued locations (i.e., same-location costs), suggesting that improvements in attentional selectivity were achieved by suppressing nontarget colors.
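The abstract does not specify the fitting procedure, so the following is only a minimal sketch of the kind of analysis described: fitting a tuning function to cueing effects as a function of cue-target color difference and comparing its width with the spread of color-wheel report errors. The Gaussian form, parameter names, and all data values below are illustrative assumptions, not the authors' method or data.

```python
# Sketch (not the authors' analysis code): estimate the "width" of attentional
# selection from cueing effects across cue-target color differences, and compare
# it with the circular spread of color-wheel report errors.
import numpy as np
from scipy.optimize import curve_fit

def gaussian_tuning(delta_deg, amplitude, width_deg, baseline):
    """Cueing effect (uncued RT minus cued RT, ms) as a Gaussian function of the
    cue-target color difference in degrees of color space (assumed form)."""
    return baseline + amplitude * np.exp(-0.5 * (delta_deg / width_deg) ** 2)

# Hypothetical cue-target color differences (deg) and mean cueing effects (ms).
deltas = np.array([0, 15, 30, 45, 60, 90, 120, 180], dtype=float)
cueing_effects = np.array([35, 30, 22, 12, 4, -3, -5, -6], dtype=float)

# Fit the tuning function; width_attention indexes the precision of selection.
(amp, width_attention, base), _ = curve_fit(
    gaussian_tuning, deltas, cueing_effects, p0=[30.0, 40.0, 0.0]
)

# Hypothetical color-wheel report errors (deg) for the target color; their
# circular standard deviation indexes the precision of the memory report.
report_errors = np.random.default_rng(0).normal(loc=0.0, scale=12.0, size=200)
resultant = np.abs(np.mean(np.exp(1j * np.radians(report_errors))))
width_memory = np.degrees(np.sqrt(-2 * np.log(resultant)))

print(f"Attentional selection width: {width_attention:.1f} deg")
print(f"Memory report width (circular SD): {width_memory:.1f} deg")
```

Under these assumptions, a narrower fitted width for the memory reports than for the cueing-effect function would correspond to the paper's conclusion that memory precision exceeds the precision of attentional selection.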