Abstract
In visual search, a moving target among stationary distracters is detected more rapidly and more efficiently than a stationary target among moving distracters. Here we examined how this search asymmetry depends on motion signals from three distinct coordinate systems: retinal, relative, and spatiotopic (head/body-centered). Our search display consisted of a target element, distracter elements, and a fixation point tracked by observers. Each element was composed of a spatial carrier grating windowed by a Gaussian envelope, and the motions of carriers, windows, and fixation were manipulated independently and used in various combinations to decouple the respective effects of the motion coordinate systems on visual search asymmetry. We found that retinal motion contributes little to reaction times and search slopes, whereas relative and spatiotopic motions contribute to them substantially. These results highlight the important role of non-retinotopic motion signals in guiding observer attention in visual search.
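To make the stimulus description concrete, the following minimal NumPy sketch renders one frame of such an element: a sinusoidal carrier grating windowed by a Gaussian envelope, with carrier drift and window drift controlled by separate parameters. All parameter names and values here are illustrative assumptions, not the stimulus parameters used in the study.

import numpy as np

def gabor_frame(t, carrier_speed=0.0, window_speed=0.0,
                size=128, sigma=12.0, sf=0.05):
    """Render one frame of a grating-in-Gaussian-window element.

    carrier_speed : drift of the sinusoidal carrier (pixels/frame);
                    moves the grating inside its window.
    window_speed  : drift of the Gaussian envelope (pixels/frame);
                    moves the whole element across the display.
    All values are illustrative, not taken from the original stimuli.
    """
    y, x = np.mgrid[0:size, 0:size] - size / 2.0
    # Carrier: vertical sine grating whose phase advances over time.
    phase = 2 * np.pi * sf * carrier_speed * t
    carrier = np.sin(2 * np.pi * sf * x + phase)
    # Window: Gaussian envelope whose centre translates over time.
    cx = window_speed * t
    window = np.exp(-((x - cx) ** 2 + y ** 2) / (2 * sigma ** 2))
    return carrier * window

# Example conditions: the grating can move within a stationary window,
# the whole element can translate, or both can stay still; combined with
# a moving fixation point, such manipulations can dissociate retinal,
# relative, and spatiotopic motion.
static_frame = gabor_frame(t=10, carrier_speed=0.0, window_speed=0.0)
carrier_only = gabor_frame(t=10, carrier_speed=2.0, window_speed=0.0)
window_only  = gabor_frame(t=10, carrier_speed=0.0, window_speed=2.0)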
Highlights
The efficiency of visual search depends on the relative strength of feature properties between target and distracters
Reaction times for the MOVING target were significantly shorter than for the STATIONARY target [ANOVA: F(1, 6) = 11.25, p = 0.002], consistent with the classical moving/stationary target asymmetry found in visual search tasks (Royden et al., 2001); a sketch of this analysis appears after these highlights
The present study examined how types of motion signals defined in different coordinate systems—retinal, relative, and spatiotopic—generate the search asymmetry effect found for moving/stationary targets
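To illustrate the form of the statistical test reported above, here is a minimal Python sketch of a one-way repeated-measures ANOVA on per-observer mean reaction times using statsmodels. The data frame, column names, and all values below are invented purely for illustration and are not the study's data.

import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format data: one mean RT per observer per target
# type for seven observers (matching the F(1, 6) degrees of freedom).
data = pd.DataFrame({
    "observer": list(range(1, 8)) * 2,
    "target":   ["moving"] * 7 + ["stationary"] * 7,
    "rt_ms":    [512, 498, 530, 505, 521, 490, 515,
                 603, 588, 640, 610, 595, 620, 605],
})

# One-way repeated-measures ANOVA on target type (moving vs stationary).
res = AnovaRM(data, depvar="rt_ms", subject="observer",
              within=["target"]).fit()
print(res.anova_table)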
Summary
The efficiency of visual search depends on the relative strength of feature properties between the target and distracters. Motion is a dominant feature in visual search: a moving target among stationary distracters is detected more rapidly and more efficiently (i.e., with a flatter function of reaction time vs display set size) than a stationary target among moving distracters (Royden et al., 2001). This asymmetry is attributed to the strong perceptual saliency of visual motion signals (Theeuwes, 1994, 1995; Rosenholtz, 1999; Wolfe, 2001) and to the ability of motion signals to capture observer attention immediately (Hillstrom and Yantis, 1994; Abrams and Christ, 2006). The neural representation of retinal motion signals originates primarily from retinal input, whereas representations of non-retinal motion signals are generated by comparing and integrating retinal inputs with one another or with sensorimotor signals (Wurtz, 2008).
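To illustrate what "more efficiently (a flatter function of reaction time vs display set size)" means operationally, the following sketch fits a line to hypothetical mean reaction times at several set sizes; the fitted slope in ms per item is the standard search-slope measure. All set sizes and RT values here are invented for illustration only.

import numpy as np

# Hypothetical mean reaction times (ms) at each display set size.
set_sizes = np.array([4, 8, 12, 16])
rt_moving_target     = np.array([480, 490, 497, 505])   # shallow slope
rt_stationary_target = np.array([520, 610, 705, 790])   # steep slope

# Search slope = fitted increase in RT per added item; a flatter slope
# indicates a more efficient search.
slope_moving, _     = np.polyfit(set_sizes, rt_moving_target, 1)
slope_stationary, _ = np.polyfit(set_sizes, rt_stationary_target, 1)
print(f"moving target:     {slope_moving:.1f} ms/item")
print(f"stationary target: {slope_stationary:.1f} ms/item")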