Abstract

Contextual cueing is a phenomenon of visual statistical learning observed in visual search tasks. Previous research has found that the degree to which items deviate from their centroid, known as variability, determines the extent of generalization for a repeated scene. Introducing variability substantially increases the dissimilarity between multiple occurrences of the same repeated layout. However, current theories do not explain the mechanisms that overcome this dissimilarity during contextual cue learning. We propose that the cognitive system first abstracts specific scenes into scene layouts through an automatic clustering process that is unrelated to specific repeated scenes, and then uses these abstracted layouts for contextual cue learning. Experiment 1 shows that introducing greater variability in search scenes hinders contextual cue learning. Experiment 2 further establishes that extensive visual search through entirely novel scenes with spatial variability facilitates subsequent contextual cue learning under corresponding scene variability, confirming that clustering knowledge is acquired before contextual cue learning and independently of specific repeated scenes. Overall, this study demonstrates that visual statistical learning operates at multiple levels: item-level learning can serve as material for layout-level learning, and generalization reflects the constraining role of item-level knowledge on layout-level knowledge.
