Abstract

Stereoscopic, head-tracked display systems can show users realistic, world-locked virtual objects and environments (i.e., rendering perspective-correct binocular images with accurate motion parallax). However, discrepancies between the rendering pipeline and physical viewing conditions can lead to perceived instability in the rendered content, resulting in reduced immersion and, potentially, visually induced motion sickness. Precise requirements for achieving perceptually stable world-locked rendering (WLR) are unknown, due to the challenge of constructing a wide field-of-view, distortion-free display with highly accurate head and eye tracking. We present a custom-built system that satisfies these constraints, capable of rendering virtual objects over real-world references without perceivable drift. We use this platform to study acceptable errors in render camera position for WLR in augmented and virtual reality scenarios, finding an order-of-magnitude difference in perceptual sensitivity between the two. We conclude with an analytic model examining changes to apparent depth and visual direction in response to camera displacement errors, which highlights visual direction as a potentially important consideration for WLR alongside depth errors from incorrect disparity.
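
The analytic model itself is given in the paper's full text. As a rough, self-contained intuition for how a render-camera position error couples into disparity-defined depth and visual direction, the following minimal pinhole-geometry sketch may help; the function names, the single-point setup, and the simplifying assumptions (one eye's camera displaced laterally, small angles, point straight ahead) are illustrative choices of ours, not the authors' model:

```python
import numpy as np

# Toy sketch: a point sits at distance `depth_m` straight ahead of the
# viewer's cyclopean eye. If one eye's render camera is displaced laterally
# by `offset_m`, the rendered image for that eye is correct for the wrong
# viewpoint, perturbing both the point's visual direction and the binocular
# disparity (and hence its apparent depth).

def direction_error_deg(offset_m: float, depth_m: float) -> float:
    """Angular shift (degrees) in the point's rendered direction caused by
    a lateral camera displacement of `offset_m` for one eye."""
    return np.degrees(np.arctan2(offset_m, depth_m))

def apparent_depth_m(ipd_m: float, depth_m: float, offset_m: float) -> float:
    """Apparent depth implied by the erroneous disparity when the left
    eye's render camera is displaced laterally by `offset_m`."""
    half = ipd_m / 2.0
    # Direction to the point as rendered by the displaced left camera,
    # but viewed from the true left-eye position.
    theta_left = np.arctan2(half - offset_m, depth_m)
    # Right camera is unperturbed.
    theta_right = np.arctan2(half, depth_m)
    vergence = theta_left + theta_right
    # Depth at which correctly rendered content would produce this vergence.
    return half / np.tan(vergence / 2.0)

if __name__ == "__main__":
    depth = 1.0    # point 1 m away
    ipd = 0.063    # nominal interpupillary distance (m)
    err = 0.005    # 5 mm render-camera position error
    print(f"direction error: {direction_error_deg(err, depth):.2f} deg")
    print(f"apparent depth:  {apparent_depth_m(ipd, depth, err):.3f} m")
```

Under these toy assumptions, a 5 mm lateral error for one eye at 1 m viewing distance shifts apparent depth by several centimeters while also displacing visual direction by a fraction of a degree, which is consistent with the abstract's point that direction errors matter alongside disparity-driven depth errors.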
