Abstract

Contextual knowledge has been shown to facilitate object recognition. A recent study suggested that contextual integration of individual objects requires attention (Gronau & Shachar, 2014). In contrast, attention also appears to be influenced by semantic information within a real-world scene context (Hwang et al., 2011). It is currently unclear whether object contextual knowledge is activated when object identity information is completely task-irrelevant. To investigate whether task-irrelevant contextual information can influence attention allocation, we measured eye movements in a simple three-object display. Participants (N = 25) were first asked to fixate on an object presented at the center of the screen, flanked by two peripheral objects, one on each side (objects subtended 4°, and the peripheral objects were 8° from the center object). In each object triplet, the peripheral objects were either contextually related or unrelated to the center object (e.g., a grater at the center, with cheese and a backpack in the periphery). After a brief delay (375-475 ms), a color probe appeared on one of the peripheral objects, and participants were asked to make a saccadic response to it as quickly as possible. Although contextual information was task-irrelevant, the influence of contextual relatedness depended on the stimulus onset asynchrony (SOA) between the presentation of the object array and the probe. At shorter SOAs (< 425 ms), saccadic responses were faster to related than to unrelated items, whereas the opposite pattern emerged at longer SOAs (> 425 ms). These results reveal a dynamic effect of contextual knowledge on visual attention: related objects appear to be prioritized and capture attention at an early stage, but this advantage does not last, and attention subsequently shifts to unrelated objects. Our findings are consistent with the idea that high-level representations, in this case contextual information, can influence attention allocation. Meeting abstract presented at VSS 2016.
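
The following is a minimal sketch of the trial structure described above, not the authors' experiment code. It assumes hypothetical names (Trial, make_trials, soa_bin) and illustrates only the design parameters stated in the abstract: related/unrelated triplets, a 375-475 ms SOA between array onset and probe, and the 425 ms split used in the analysis.

```python
# Hypothetical sketch of the trial structure; values beyond those stated in the
# abstract (e.g., n_repeats) are assumptions for illustration only.
import random
from dataclasses import dataclass

@dataclass
class Trial:
    center_object: str      # fixated object at screen center (4 deg)
    related_object: str     # contextually related peripheral object (8 deg eccentricity)
    unrelated_object: str   # contextually unrelated peripheral object
    probed_side: str        # which peripheral object receives the color probe
    soa_ms: float           # array-to-probe SOA, sampled from 375-475 ms

def make_trials(triplets, n_repeats=2, seed=0):
    """Build a randomized trial list; each triplet is (center, related, unrelated)."""
    rng = random.Random(seed)
    trials = []
    for center, related, unrelated in triplets:
        for _ in range(n_repeats):
            trials.append(Trial(
                center_object=center,
                related_object=related,
                unrelated_object=unrelated,
                probed_side=rng.choice(["related", "unrelated"]),
                soa_ms=rng.uniform(375, 475),
            ))
    rng.shuffle(trials)
    return trials

def soa_bin(trial):
    """Split trials at the 425 ms midpoint used in the reported analysis."""
    return "short" if trial.soa_ms < 425 else "long"

if __name__ == "__main__":
    example_triplets = [("grater", "cheese", "backpack")]  # example from the abstract
    for t in make_trials(example_triplets):
        print(soa_bin(t), t)
```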
