Abstract

Past research on the advantages of multisensory input for remembering spatial information has mainly focused on memory for objects or surrounding environments. Less is known about the role of cue combination in memory for own body location in space. In a previous study, we investigated participants’ accuracy in reproducing a rotation angle in a self-rotation task. Here, we focus on the memory aspect of the task. Participants had to rotate themselves back to a specified starting position under three sensory conditions: a blind condition, a condition with disrupted proprioception, and a condition in which both vision and proprioception were reliably available. To distinguish between the encoding and storage phases of remembering proprioceptive information, rotation amplitude and recall delay were manipulated. The task was completed in a real testing room and in immersive virtual reality (IVR) simulations of the same environment. We found that proprioceptive accuracy was lower when vision was not available and that performance was generally less accurate in IVR. In the reality conditions, the degree of rotation affected accuracy only in the blind condition, whereas in IVR it increased errors in the blind condition and, to a lesser degree, in the disrupted-proprioception condition. These results indicate that encoding of own body location improves when vision and proprioception are optimally integrated. No reliable effect of delay was found.

Highlights

  • Living in complex, large-scale environments makes it vital for us to continuously update our knowledge about our body’s location in space relative to various frames of reference

  • Participants could not see their body in the immersive virtual reality (IVR) conditions, whereas in the reality conditions it was in their field of view, which could lead to poorer performance in the IVR visual conditions

  • The present study indicates that precision in re-executing own body rotation markedly decreases with larger rotations either when participants are deprived of vision or when their proprioception is manipulated to be unreliable

Introduction

Living in complex, large-scale environments makes it vital for us to continuously update our knowledge about our body’s location in space relative to various frames of reference. This involves processing and integrating information from different sensory modalities, including proprioception (the sense of the position and movement of our body in space) and vision. The contributions of vision and proprioception to retrieving information about own body location and movement at different stages of remembering, i.e., the encoding, storage, and recall phases, are unknown. Understanding these complex relationships is crucial for developing technologies that facilitate human navigation and for informing motor rehabilitation programs.
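
For context, the “optimal integration” of vision and proprioception invoked here and in the abstract is commonly formalized as maximum-likelihood cue combination. The following is a minimal sketch of that standard model, assuming independent Gaussian noise in each modality; the notation is illustrative and not taken from the paper itself:

$$\hat{\theta} = w_v \hat{\theta}_v + w_p \hat{\theta}_p, \qquad w_v = \frac{\sigma_p^2}{\sigma_v^2 + \sigma_p^2}, \qquad w_p = \frac{\sigma_v^2}{\sigma_v^2 + \sigma_p^2}$$

Here $\hat{\theta}_v$ and $\hat{\theta}_p$ are the visual and proprioceptive estimates of the rotation angle, and $\sigma_v^2$ and $\sigma_p^2$ are their noise variances. The combined estimate has variance $\sigma_v^2 \sigma_p^2 / (\sigma_v^2 + \sigma_p^2)$, which is never larger than either single-cue variance. Under this model, removing vision (the blind condition) or inflating $\sigma_p^2$ (disrupted proprioception) predicts the drop in accuracy reported above.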
