Abstract

Does the strength of representations in long-term memory (LTM) depend on which type of attention is engaged? We tested participants’ memory for objects seen during visual search. We compared implicit memory for two types of objects—related-context nontargets that grabbed attention because they matched the target-defining feature (i.e., color; top-down attention) and salient distractors that captured attention only because they were perceptually distracting (bottom-up attention). In Experiment 1, the salient distractor flickered, while in Experiment 2, the luminance of the salient distractor alternated. Critically, salient and related-context nontargets produced equivalent attentional capture, yet related-context nontargets were remembered far better than salient distractors (and salient distractors were not remembered better than unrelated distractors). These results suggest that LTM depends not only on the amount of attention but also on the type of attention. Specifically, top-down attention is more effective than bottom-up attention in promoting the formation of memory traces.


Introduction

Does the strength of representations in long-term memory (LTM) depend on which type of attention is engaged? We tested participants’ memory for objects seen during visual search. We compared implicit memory for two types of objects—related-context nontargets that grabbed attention because they matched a target feature (top-down attention) and salient distractors that captured attention only because they were perceptually distracting (bottom-up attention). Note that capture by an object that shares a feature with a target held in memory is operationalized as top-down capture rather than a priming effect (i.e., the facilitation of the processing of a stimulus due to the prior presentation of a stimulus that is perceptually or semantically related; Kristjánsson & Campana, 2010). Such a distinction is consistent with studies showing that recent exposure to an object is insufficient to elicit capture by matching distractors, and that only representations held in working memory (WM) might guide attention. Better visual long-term memory (VLTM) performance for one of these distractor types would suggest that encoding into VLTM depends on the type of attention that is engaged.

