Abstract

Goal-directed eye and hand movements are preceded by attention shifts towards the movement targets. Whether attentional resources can be allocated independently towards multiple effector target locations, or whether a single attentional system underlies target selection for multiple effectors, remains a matter of debate. Here, we used the TVA approach (Theory of Visual Attention; Bundesen, 1990) to measure the distribution of attentional resources before single and combined eye-hand movements. We applied a whole-report paradigm in which six letters arranged in a semi-circle were briefly (17–167 ms) presented. Observers (n = 8) performed single eye or hand movements, or combined eye and hand movements, to centrally cued locations. Gaze and finger positions were recorded with a video-based eye tracker and a touch screen. We used letter categorization performance as a proxy for attentional capacity and modelled the data within the TVA framework. Additionally, we used the TVA model to estimate the probability of correct categorization at motor targets and non-targets to evaluate attentional selectivity. This allowed us to directly determine attentional capacity (processing speed) at multiple movement-relevant and movement-irrelevant locations within a single trial. Our results show that total attentional capacity is constant across the different action conditions and does not increase with the number of active effectors. However, attention is predominantly allocated towards the movement-relevant locations. The data demonstrate that attentional resources can be allocated simultaneously and independently towards both eye and finger targets during combined movements without competition, albeit at the cost of reduced attention at movement-irrelevant locations. Overall, our findings suggest that attention can indeed be allocated towards multiple effector targets in parallel, as long as sufficient attentional resources are available.
They also demonstrate, for the first time, that the TVA framework can be used as a sensitive tool to measure action-related shifts of visual attention.
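For readers unfamiliar with the TVA framework, the modelling step referenced above can be illustrated with a minimal sketch of TVA's exponential-race component (Bundesen, 1990): each display item i races towards visual short-term memory at a processing rate v_i, and the rates sum to the total capacity C. All numeric values below are hypothetical and chosen for illustration only; they are not parameter estimates from this study, and the sketch ignores TVA's VSTM storage limit K for simplicity.

```python
import math

def tva_encoding_prob(v_i, t, t0):
    """Probability that an item with processing rate v_i (items/s) is
    encoded into visual short-term memory given an exposure duration t (s)
    and a perceptual threshold t0 (s), per TVA's exponential race
    (Bundesen, 1990). VSTM capacity limits (K) are ignored here."""
    if t <= t0:
        return 0.0  # below the perceptual threshold, nothing is encoded
    return 1.0 - math.exp(-v_i * (t - t0))

# Hypothetical parameters: total capacity C = 60 items/s distributed over
# six display locations, with more weight at movement-relevant locations.
rates = [25, 15, 5, 5, 5, 5]   # v_i values; sum(rates) = C = 60 items/s
t, t0 = 0.100, 0.020           # 100 ms exposure, 20 ms threshold

# Predicted categorization probability at each of the six locations.
probs = [tva_encoding_prob(v, t, t0) for v in rates]
```

Fitting such a model to whole-report accuracy at each exposure duration is what yields the per-location processing-speed estimates described in the abstract.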
