Abstract

Previous research has shown that egocentric and allocentric information is used for coding target locations for memory-guided reaching movements. In particular, task-relevance determines whether objects are used as allocentric cues. Here, we investigated the influence of scene configuration and object reliability, as a function of task-relevance, on allocentric coding for memory-guided reaching. For this purpose, we presented participants with images of a naturalistic breakfast scene containing five objects on a table and six objects in the background. Six of these objects served as potential reach targets (task-relevant objects). Participants explored the scene, and after a short delay a test scene appeared with one of the task-relevant objects missing, indicating the location of the reach target. After the test scene vanished, participants performed a memory-guided reaching movement toward the target location. Besides removing one object from the test scene, we also shifted the remaining task-relevant and/or task-irrelevant objects leftward or rightward, either coherently in the same direction or incoherently in opposite directions. By varying object coherence, we manipulated the reliability of the task-relevant and task-irrelevant objects in the scene. To examine the influence of scene configuration (distributed vs. grouped arrangement of task-relevant objects) on allocentric coding, we compared the present data with our previously published data set (Klinghammer et al., 2015). We found that reaching errors systematically deviated in the direction of object shifts, but only when the objects were task-relevant and their reliability was high. This effect was substantially reduced, however, when task-relevant objects were distributed across the scene, leading to a larger target-cue distance compared to a grouped configuration. No deviations of reach endpoints were observed in conditions with shifts of only task-irrelevant objects or with low object reliability, irrespective of task-relevance. Moreover, when only task-relevant objects were shifted incoherently, the variability of reach endpoints increased compared to coherent shifts of task-relevant objects. Our results suggest that the use of allocentric information for coding targets for memory-guided reaching depends on the scene configuration, in particular the average distance of the reach target to task-relevant objects, and on the reliability of task-relevant allocentric information.

Highlights

  • We constantly interact with objects in our environment, like reaching for a mug or grasping a pen

  • We investigated the use of allocentric information for memory-guided reaching by using naturalistic, complex scenes, which are closer to real-life situations than simple laboratory tasks using abstract stimuli

  • We found that reaching errors systematically deviated in the direction of object shifts, but only when the objects were task-relevant


Introduction

We constantly interact with objects in our environment, like reaching for a mug or grasping a pen. Allocentric reference frames contribute to the encoding of movement targets (Diedrichsen et al., 2004; Krigolson and Heath, 2004; Obhi and Goodale, 2005; Krigolson et al., 2007; Byrne and Crawford, 2010). There is evidence that allocentric coding is stronger for memory-guided than for visually-guided reaching movements, since allocentric cues provide more stable spatial information which can compensate for a rapid decline of visual target information (Bridgeman et al., 1997; Obhi and Goodale, 2005; Hay and Redon, 2006; Chen et al., 2011). Allocentric coding schemes do contribute to visually-guided reaching as well (Taghizadeh and Gail, 2014), supporting the notion of a combined use of egocentric and allocentric reference frames for visually-guided and memory-guided reaching movements.
