Abstract

Virtual environments (VEs) afford interactions similar to those in physical environments: individuals can navigate and manipulate objects. Yet a prerequisite for these interactions is being able to view the environment. Despite the existence of numerous scene-viewing techniques (i.e., interaction techniques that facilitate the visual perception of virtual scenes), there is no guidance to help designers choose which techniques to implement. We propose a scene taxonomy based on the visual structure and task within a VE, drawing on literature from cognitive psychology and computer vision, as well as virtual reality (VR) applications. We demonstrate how the taxonomy can be used by applying it to an accessibility problem, namely limited head mobility. We used the taxonomy to classify existing scene-viewing techniques and to generate three new techniques that do not require head movement. In our evaluation of the techniques with 16 participants, participants identified tradeoffs in design considerations such as accessibility, realism, and spatial awareness that would influence whether they would use the new techniques. Our results demonstrate the potential of the scene taxonomy to help designers reason about the relationships between VR interactions, tasks, and environments.
