Prior research has shown that searching for multiple targets in a visual search task enhances distractor memory in a subsequent recognition test. Three non-mutually exclusive accounts have been offered to explain this phenomenon. The mental comparison hypothesis states that searching for multiple targets requires participants to make more mental comparisons between the targets and the distractors, which enhances distractor memory. The attention allocation hypothesis states that participants allocate more attention to distractors because a multiple-target search cue leads them to expect a more difficult search. Finally, the partial match hypothesis states that searching for multiple targets increases the featural overlap between targets and distractors, which requires greater attention to reject each distractor. In two experiments, we examined these hypotheses by manipulating visual working memory (VWM) load and target-distractor similarity of AI-generated faces in a visual search (i.e., rapid serial visual presentation, RSVP) task. Distractor similarity was manipulated using a multidimensional scaling model constructed from facial landmarks and other metadata of each face. In both experiments, distractors from multiple-target searches were recognized better than distractors from single-target searches. Experiment 2 additionally revealed that increased target-distractor similarity during search improved distractor recognition memory, consistent with the partial match hypothesis.
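A minimal sketch of how a face-similarity space of the kind described above could be derived with multidimensional scaling, assuming flattened landmark coordinates per face; all names, array shapes, and parameters here are illustrative assumptions rather than the authors' actual pipeline.

```python
# Hypothetical sketch: embedding faces in a similarity space via metric MDS
# over pairwise landmark distances. Placeholder data, not the study's stimuli.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

# Assume one flattened landmark vector per face, e.g. 68 (x, y) points -> 136 values.
rng = np.random.default_rng(0)
landmarks = rng.normal(size=(50, 136))            # 50 placeholder faces

# Pairwise dissimilarities between faces (Euclidean distance over landmarks).
dissimilarity = squareform(pdist(landmarks, metric="euclidean"))

# Embed the faces in a low-dimensional space with metric MDS.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
face_space = mds.fit_transform(dissimilarity)     # shape: (50, 2)

# Target-distractor similarity could then be operationalized as proximity
# (negative Euclidean distance) in the embedded space.
def similarity(i: int, j: int, space: np.ndarray = face_space) -> float:
    return -float(np.linalg.norm(space[i] - space[j]))
```

In such a space, "high-similarity" distractors for a given target face would simply be its nearest neighbors in the embedding, which is one plausible way to realize the target-distractor similarity manipulation the abstract describes.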