Abstract

In everyday life, our brain constantly builds spatial representations of the objects surrounding us. Many studies have investigated the nature of these spatial representations, and it is well established that we use allocentric information in real-time and memory-guided movements. However, most of these studies relied on small-scale, static setups, leaving it unclear whether similar paradigms yield the same results on a larger scale with dynamic objects. We created a virtual reality task that required participants to encode the landing position of a virtual ball thrown by an avatar. The encoding task was either purely perceptual (“view where the ball landed while standing still”; Experiment 1) or involved an action (“intercept the ball with the foot just before it lands”; Experiment 2). After encoding, participants placed a real ball at the remembered landing position in the virtual scene. In some trials, we subtly shifted either the thrower or the midfield line on a soccer field to manipulate allocentric coding of the ball’s landing position. In both experiments, we replicated classic findings from small-scale experiments and generalized them to different encoding tasks (perception vs. action) and response modes (reaching vs. walking-and-placing). Moreover, participants preferentially encoded the ball relative to the thrower when they had to intercept it, suggesting that the encoding task determines the use of allocentric information by enhancing task-relevant cues. Our findings indicate that results previously obtained from memory-guided reaching are not restricted to small-scale movements but generalize to whole-body movements in large-scale, dynamic scenes.

Highlights

  • How is observing a ball thrown to you different from catching it? In the context of spatial coding, we ask whether our brain processes these two cases differently

  • Experiment 1 investigated whether previous findings on allocentric coding for reaching generalize to large-scale and dynamic environments

  • The soccer field provided a reasonable task setting for placing contextual cues and using a dynamic action object


Introduction

How is observing a ball thrown to you different from catching it? In the context of spatial coding, we want to understand whether our brain processes these two cases differently. The location of an object can typically be described by three coordinates, one on each of the three axes that span Euclidean space. This naturally raises the question of where we place the origin of this three-dimensional coordinate system. In an egocentric reference frame, the location of an object is encoded relative to oneself, i.e., relative to some part of the body. This plays a pivotal role when performing actions toward objects, as we need to know, for example, where an object is located relative to our hand.
