Abstract

In the contextual cueing task, visual search is faster for targets embedded in invariant displays than for targets embedded in variant displays. However, it has been repeatedly shown that participants do not learn repeated contexts when these are irrelevant to the task. One potential explanation lies in the idea of associative blocking, whereby salient cues (task-relevant old items) block the learning of invariant associations in the task-irrelevant subset of items. An alternative explanation is that associative blocking hinders the allocation of attention to task-irrelevant subsets rather than the learning per se. The current work examined these two explanations. Participants performed a visual search task under a rapid presentation condition (300 ms) in Experiment 1 or a longer presentation condition (2,500 ms) in Experiment 2. In both experiments, the search items in both old and new displays were presented in two colors that defined the task-relevant and task-irrelevant items within each display. In the learning phase, participants were asked to search for the target in the relevant subset. In the transfer phase, the instructions were reversed, and the task-irrelevant items became task-relevant (and vice versa). In line with previous studies, searching task-irrelevant subsets resulted in no cueing effect post-transfer in the longer presentation condition; however, a reliable cueing effect was generated by task-irrelevant subsets learned under the rapid presentation. These results demonstrate that under rapid display presentation, global attentional selection leads to global context learning, whereas under longer display presentation, global attention is blocked, leading to exclusive learning of the invariant relevant items in the learning session.

Highlights

  • Our visual system evolved to take advantage of spatial regularities in the environment to facilitate visual search

  • The success of such acquisition was assessed in the transfer phase, where the instructions regarding which color to attend were reversed

  • We observed reliable contextual cueing transfer effects for the search items that were originally presented in both task-relevant and -irrelevant colors, indicating that rapid presentation of search items induced learning of task-irrelevant invariant subsets

Introduction

Our visual system evolved to take advantage of spatial regularities in the environment to facilitate visual search. Context-guided performance was first demonstrated by Chun and Jiang (1998), who used an elegant visual search task to investigate how repeated configurations of items (contexts) could facilitate search performance. Over time, participants' search times became faster for repeated contexts than for random contexts, a phenomenon termed the contextual cueing effect. The idea behind this finding is that repeated contexts are learned, orienting participants' attention to the target (Chun and Jiang, 1998; Sisk et al., 2019). Put simply, learned spatial contexts facilitate attentional processes and improve visual search.
