Abstract
Immersive virtual reality (VR) technology has become a popular method for fundamental and applied spatial cognition research. One challenge researchers face is emulating walking through a large-scale virtual space while the user is, in fact, in a small physical space. To address this, a variety of movement interfaces in VR have been proposed, from traditional joysticks to teleportation and omnidirectional treadmills. These movement methods tap into different mental processes of spatial learning during navigation, but their impacts on distance perception remain unclear. In this paper, we investigated the roles of visual display, proprioception, and optic flow in distance perception in a large-scale building by manipulating four different movement methods. Eighty participants either walked in a real building or moved through its virtual replica using one of three movement methods: VR-treadmill, VR-touchpad, and VR-teleportation. Results revealed that, first, visual display played a major role in both perceived and traversed distance estimates but did not impact environmental distance estimates. Second, proprioception and optic flow did not impact the overall accuracy of distance perception, but having only intermittent optic flow (in the VR-teleportation movement method) impaired the precision of traversed distance estimates. In conclusion, movement method plays a significant role in distance perception but does not impact the configurational knowledge learned in a large-scale real or virtual building, and the VR-touchpad movement method provides an effective interface for navigation in VR.
Highlights
Understanding human distance perception while learning a new large-scale environment is important in explaining and modeling spatial learning and wayfinding behaviors.
The linear mixed models revealed a significant effect of visual display on perceived distance estimates, b = −0.273, SE_b = 0.051, t(38) = −5.403, p < .001, marginal R² = 0.274 (a minimal model-fitting sketch follows these highlights).
The results showed that participants who produced more accurate environmental distance judgements in the sketch-map task and the map-selection task were not necessarily more accurate in estimating perceived distances (M = 1.139, SD = 0.406) or traversed distances (M = 0.868, SD = 0.277) than participants who produced incorrect environmental distance estimates.
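For illustration, the sketch below shows one way such a linear mixed model could be fit in Python with statsmodels. The data file, the column names (participant, visual_display, perceived_estimate), and the random-intercept structure are assumptions made for this example; they are not taken from the paper, whose analysis code is not reproduced here.

# Illustrative sketch only; the file name and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# Long-format data: one row per distance estimate, repeated within participants.
df = pd.read_csv("distance_estimates.csv")  # hypothetical file

# Fixed effect of visual display condition, random intercept per participant.
model = smf.mixedlm("perceived_estimate ~ visual_display",
                    data=df,
                    groups=df["participant"])
result = model.fit()
print(result.summary())  # fixed-effect coefficient, its standard error, and test statistic

Note that statsmodels does not report a marginal R² directly; a value like the one quoted in the highlight above would have to be computed separately (e.g., with the Nakagawa–Schielzeth approach).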
Summary
Most navigators begin to acquire metric and configurational knowledge (i.e., distance and direction) on first exposure to a new environment, which improves over time (Ishikawa and Montello 2006; Montello 1998). This spatial knowledge acquisition process often involves encoding the distances between different objects and locations, which requires the integration of perceived distance, traversed distance, and environmental distance (Loomis et al. 1996; Montello 1997; Sadalla and Staplin 1980). Loomis et al. (1996) reported that no large systematic error was observed when blindfolded observers walked to a previously seen target in a well-lit environment.