Induced Roelofs effect in reaching

Bahareh Taghizadeh 1 and Alexander Gail 1,2*

1 German Primate Center, Germany
2 Bernstein Center for Computational Neuroscience, Germany

Abstract

Goal-directed reaching requires accurate localization of the target object, but the localization of visual objects is prone to illusions induced by the visual context. In the induced Roelofs effect (IRE), the position of a task-irrelevant visual object induces a shift in the localization of the visual target object. This is the case when subjects have to indicate the position of the target object relative to an array of reference positions, e.g. by pressing response keys or by pointing to it after a delay. In contrast, when subjects in the same task indicate the position of the target object by pointing to it without an instructed delay, or by directly reaching towards it with or without delay, no IRE is induced [1]. This discrepancy was taken as evidence for separate visuospatial representations for direct sensorimotor processing as opposed to spatial cognitive processing [2]. Here we test whether an IRE can also be induced for reaching movements towards the target stimulus, which would argue against such strictly separate processing.

We asked human subjects to perform reaches towards visual target stimuli in the frontoparallel plane. Each trial started with a brief appearance of a reference array (RA) of five horizontally arranged boxes, which indicated the potential target positions. Then the target and a surrounding task-irrelevant visual frame were flashed, followed by a decision array (DA) immediately after the cue/frame offset. The DA was identical to the RA but could be placed at different positions on the screen. While keeping ocular fixation, subjects had to reach towards the DA box that corresponded to the RA box in which they had seen the flashed target.

The results showed a reliable IRE for reaching. In our task the reach goal had to be defined relative to a task-relevant object, i.e. in an object-based reference frame, whereas in previous experiments the reach goal was directly defined by the target object. The results suggest that during object-based encoding of motor-goal locations, information about additional task-irrelevant objects can induce systematic mislocalizations of the reach goal. This finding argues against strictly separate visuospatial representations for direct sensorimotor processing and spatial cognitive processing.
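To make the trial structure concrete, the following is a minimal Python sketch of the paradigm as described above. All numerical values (box spacing, frame offsets, decision-array displacements) are hypothetical placeholders for illustration only; the abstract does not report the actual stimulus parameters or timings.

```python
# Minimal sketch of one trial of the reaching task described in the abstract.
# All numbers below are hypothetical illustrations, not the experimental values.

import random
from dataclasses import dataclass

N_BOXES = 5            # five horizontally arranged boxes per array
BOX_SPACING = 2.0      # hypothetical spacing between adjacent boxes (e.g. deg)

@dataclass
class Trial:
    target_index: int      # which of the five RA boxes contained the flashed target
    frame_offset_x: float  # horizontal offset of the task-irrelevant frame
    da_center_x: float     # horizontal center of the decision array (DA)

def box_positions(center_x: float) -> list[float]:
    """Horizontal positions of the five boxes of an array centered at center_x."""
    half = (N_BOXES - 1) / 2
    return [center_x + (i - half) * BOX_SPACING for i in range(N_BOXES)]

def make_trial() -> Trial:
    """Randomize target box, frame offset, and DA placement for one trial."""
    return Trial(
        target_index=random.randrange(N_BOXES),
        frame_offset_x=random.choice([-3.0, 0.0, +3.0]),  # hypothetical offsets
        da_center_x=random.choice([-1.0, 0.0, +1.0]),     # DA may be displaced from RA
    )

def correct_reach_position(trial: Trial) -> float:
    """Subjects reach to the DA box with the same index as the RA box in which
    the target was flashed; the DA itself may be shifted relative to the RA."""
    return box_positions(trial.da_center_x)[trial.target_index]

if __name__ == "__main__":
    t = make_trial()
    print(f"Target in RA box {t.target_index}, frame offset {t.frame_offset_x:+.1f}")
    print(f"Correct reach endpoint on the DA: {correct_reach_position(t):+.1f}")
    # An induced Roelofs effect would appear as a systematic bias of the actual
    # reach endpoints away from this correct position, depending on frame_offset_x.
```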
