Abstract
We report the results of an experiment examining the effects of immersive viewing on a common spatial knowledge acquisition task, spatial updating, in a spherical panoramic environment (SPE). A spherical panoramic environment, such as Google Street View, is composed of spherical images captured at regular intervals in a real-world setting and augmented with virtual navigational aids such as paths, dynamic maps, and textual annotations. Participants navigated the National Mall area of Washington, DC, in Google Street View in one of two viewing conditions: a desktop monitor or a head-mounted display with a head orientation tracker. In an exploration phase, participants first navigated a pre-specified path and observed landmarks along it. Then, in a testing phase, they traveled the same path and, at certain waypoints, rotated their view to look in the direction of the perceived landmarks. The angular difference between each participant's gaze direction and the actual landmark direction was recorded. We found no significant difference between the immersive and desktop viewing conditions in participants' accuracy of direction to landmarks, and no difference in their sense-of-presence scores. Moreover, based on responses to a post-experiment questionnaire, participants in both conditions tended to use a cognitive or procedural technique to infer direction to landmarks. Taken together, these findings suggest that when participants experience travel based on teleportation between waypoints, the visual cues available in the SPE, such as street signs, buildings, and trees, have a stronger influence on determining directions to landmarks than egocentric cues such as the first-person perspective and natural head-coupled motion experienced in the immersive viewing condition.