Abstract

Previous research in spatial cognition has often relied on simple spatial tasks in static environments in order to draw inferences regarding navigation performance. These tasks are typically divided into categories (e.g., egocentric or allocentric) that reflect different two-systems theories. Unfortunately, this two-systems approach has been insufficient for reliably predicting navigation performance in virtual reality (VR). In the present experiment, participants were asked to learn and navigate towards goal locations in a virtual city and then perform eight simple spatial tasks in a separate environment. These eight tasks were organised along four orthogonal dimensions (static/dynamic, perceived/remembered, egocentric/allocentric, and distance/direction). We employed confirmatory and exploratory analyses in order to assess the relationship between navigation performance and performance on these simple tasks. We provide evidence that a dynamic task (i.e., intercepting a moving object) predicts navigation performance in a familiar virtual environment better than several categories of static tasks. These results have important implications for studies on navigation in VR, which tend to over-emphasise the role of spatial memory. Given that our dynamic tasks required efficient interaction with the human interface device (HID), they were more closely aligned with the perceptuomotor processes associated with locomotion than with wayfinding. In the future, researchers should consider training participants on HIDs using a dynamic task prior to conducting a navigation experiment. Performance on dynamic tasks should also be assessed in order to avoid confounding HID skill with spatial knowledge acquisition.

Highlights

  • Researchers in spatial cognition have frequently relied on virtual reality (VR) in order to conduct experiments on human navigation [1, 2]

  • We investigated the relationships between eight spatial tasks and navigation performance in virtual reality (VR)

  • This approach was adopted in order to provide evidence for or against particular two-systems theories and to determine whether navigation performance can be predicted by a single factor or requires additional factors. Alongside this confirmatory analysis, we attempted to reduce the dimensionality of the model by conducting a regularised exploratory factor analysis (REFA)

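As a rough illustration of the dimensionality-reduction step, the sketch below fits a plain exploratory factor analysis to simulated scores on eight tasks. This is not the paper's method: the REFA used in the study adds a regularisation penalty that scikit-learn's `FactorAnalysis` does not implement, and the loadings, sample size, and two-factor structure here are invented for demonstration only.

```python
# Hypothetical sketch of an exploratory factor analysis over simulated
# scores on eight spatial tasks. The paper's REFA adds regularisation,
# which plain FactorAnalysis lacks; varimax rotation is only a stand-in.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_participants = 100

# Two invented latent abilities driving eight observed task scores.
latent = rng.normal(size=(n_participants, 2))
loadings = np.array([
    [0.8, 0.1], [0.7, 0.2], [0.6, 0.1], [0.7, 0.0],  # tasks loading on factor 1
    [0.1, 0.8], [0.2, 0.7], [0.0, 0.6], [0.1, 0.7],  # tasks loading on factor 2
])
scores = latent @ loadings.T + 0.3 * rng.normal(size=(n_participants, 8))

fa = FactorAnalysis(n_components=2, rotation="varimax").fit(scores)
print(fa.components_.shape)  # estimated loadings: one row per factor, one column per task
```

Inspecting which tasks load on which factor is how such an analysis would suggest whether the eight tasks collapse onto fewer underlying dimensions.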
Introduction

Researchers in spatial cognition have frequently relied on virtual reality (VR) in order to conduct experiments on human navigation [1, 2]. Rather than presuming the alignment of different two-systems theories, the framework used for the present study constructs several orthogonal dimensions from existing systems in order to predict navigation performance. These dimensions consist of static and dynamic stimuli, perceived and remembered information, egocentric and allocentric reference frames, and distance and direction judgements. Easton and Sholl [56] found that rotations and translations led to different performance profiles in regularly (but not irregularly) structured arrays of objects. This distinction between direction and distance may represent an additional dimension of spatial task and may be orthogonal to the static/dynamic, perceived/remembered, and egocentric/allocentric dimensions. We found that an egocentric task in which participants chased a moving object predicted goal-directed navigation better than all four dimensions taken together.
