Abstract

In the field of spatial coding it is well established that we mentally represent objects for action not only relative to ourselves, egocentrically, but also relative to other objects (landmarks), allocentrically. Several factors facilitate allocentric coding, for example, when objects are task-relevant or constitute stable and reliable spatial configurations. What is unknown, however, is how object-semantics facilitate the formation of these spatial configurations and thus allocentric coding. Here we demonstrate that (i) we can quantify the semantic similarity of objects and that (ii) semantically similar objects can serve as a cluster of landmarks that are allocentrically coded. Participants arranged a set of objects based on their semantic similarity. These arrangements were then entered into a similarity analysis. Based on the results, we created two semantic classes of objects, natural and man-made, that we used in a virtual reality experiment. Participants were asked to perform memory-guided reaching movements toward the initial position of a target object in a scene while either semantically congruent or incongruent landmarks were shifted. We found that the reaching endpoints systematically deviated in the direction of landmark shift. Importantly, this effect was stronger for shifts of semantically congruent landmarks. Our findings suggest that object-semantics facilitate allocentric coding by creating stable spatial configurations.
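
A minimal sketch of how such similarity arrangements can be turned into pairwise dissimilarities (the object labels, coordinates, and the use of NumPy/SciPy are illustrative assumptions, not details taken from the paper):

```python
# Sketch: deriving a dissimilarity matrix from a similarity arrangement,
# assuming each object's final position on the arrangement "arena" is a
# 2-D coordinate. All values below are hypothetical.
import numpy as np
from scipy.spatial.distance import pdist, squareform

objects = ["apple", "pinecone", "mug", "stapler"]   # hypothetical item labels
positions = np.array([[0.10, 0.20],                 # hypothetical arrangement
                      [0.15, 0.25],                 # coordinates per object
                      [0.80, 0.70],
                      [0.85, 0.75]])

# Pairwise Euclidean distances between arranged objects serve as the
# dissimilarity estimates; semantically similar objects lie close together.
rdm = squareform(pdist(positions, metric="euclidean"))
print(np.round(rdm, 2))
```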

Highlights

  • Grabbing a cup of coffee, reaching for the cookie jar, or switching on the light: in our everyday lives, we seamlessly interact with objects in our environment

  • We assume that objects are represented as points in a metric, high-dimensional psychological feature space, where distances between stimuli, as estimated by similarity arrangements, reflect their semantic similarity; smaller distances correspond to objects of the same class

  • This result can be visualized by arranging the objects using multidimensional scaling (MDS) such that the pairwise distances approximately reflect the distances in the Representational Dissimilarity Matrix (RDM) (Fig. 1B); see the sketch below
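
A minimal sketch of this MDS step, assuming a precomputed RDM; the matrix values and the choice of scikit-learn are illustrative, not taken from the paper:

```python
# Sketch: visualizing an RDM with metric multidimensional scaling (MDS) so
# that 2-D distances between points approximate the dissimilarities (Fig. 1B).
# The RDM below is hypothetical; in practice it would be averaged over participants.
import numpy as np
from sklearn.manifold import MDS

rdm = np.array([[0.0, 0.1, 0.9, 0.8],
                [0.1, 0.0, 0.8, 0.9],
                [0.9, 0.8, 0.0, 0.2],
                [0.8, 0.9, 0.2, 0.0]])

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(rdm)   # one 2-D point per object
print(np.round(coords, 2))
```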


Introduction

Grabbing a cup of coffee, reaching for the cookie jar, or switching on the light: in our everyday lives, we seamlessly interact with objects in our environment. The objects we act on can be represented not only relative to ourselves but also relative to other objects in the scene. When we systematically refer to these object-to-object relations, we speak of an allocentric Frame of Reference (FoR)[1,2]. This can be distinguished from an egocentric FoR, in which objects are represented relative to the observer, e.g., the body or gaze. A common way to dissociate the two is to have participants reach to the remembered position of a target object while other objects (landmarks) in the scene are shifted. If participants encoded the target object in a purely egocentric FoR, a shift of objects in the scene should not systematically affect the reaching endpoints. If, however, they encoded the target object in an allocentric FoR, i.e., relative to the other objects, the reaching endpoints should deviate in the direction of the object shift. This result has been further supported for reaching movements in depth in both real[18] and virtual reality[14] settings.
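
One way to quantify such a landmark-shift effect, sketched here under the assumption that endpoint deviations are regressed on landmark shifts (the data and the "allocentric weight" summary are illustrative, not necessarily the paper's analysis):

```python
# Sketch: summarizing how strongly reaching endpoints follow a landmark shift.
# Hypothetical data: signed landmark shifts and signed endpoint deviations (deg).
import numpy as np

landmark_shift = np.array([-5.0, -5.0, 0.0, 0.0, 5.0, 5.0])
endpoint_error = np.array([-2.1, -1.8, 0.1, -0.2, 2.3, 1.9])

# Least-squares slope through the origin ("allocentric weight"):
# 0 = purely egocentric coding (endpoints unaffected by the shift),
# 1 = endpoints follow the landmark shift completely.
allocentric_weight = (landmark_shift @ endpoint_error) / (landmark_shift @ landmark_shift)
print(f"allocentric weight = {allocentric_weight:.2f}")
```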
