Abstract

Virtual agents are common assistance tools for navigation and interaction in Virtual Reality (VR) applications such as tours, training, and education. Prior work has demonstrated that the gait, gestures, gaze, and position of a virtual agent are major factors affecting the user's perception and experience in both seated and standing VR. In this paper, we present PAVAL, a novel position-aware virtual agent locomotion method that computes virtual agent positioning (position and orientation) in real time for room-scale VR navigation assistance. We first analyze design guidelines for virtual agent locomotion and model the problem in terms of the positions of the user and the surrounding virtual objects. We then conduct a preliminary study to collect subjective data and derive a model that predicts virtual agent positioning for a fixed user position. Based on this model, we propose an algorithm that optimizes the object of interest, the virtual agent position, and the virtual agent orientation in sequence. As a result, while the user navigates a virtual scene, the virtual agent automatically moves in real time and introduces information about virtual objects to the user. We evaluate PAVAL and two alternative methods in a user study with humanoid virtual agents in several scenes, including a virtual museum, a factory, and a school gym. The results show that our method is superior to the baseline condition.
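The abstract gives no implementation details, but the three-stage pipeline it describes (select an object of interest, then an agent position, then an agent orientation) can be illustrated with a rough sketch. The following Python code is a hypothetical illustration only, not the paper's actual method: the scoring terms (view-direction alignment, a ~1.5 m social-distance preference, facing a point between user and object) and all function names are assumptions introduced here for clarity.

    import math

    # Hypothetical sketch of a sequential three-stage optimization as outlined
    # in the abstract. Scoring functions are illustrative placeholders, not
    # PAVAL's actual objective terms.

    def select_object_of_interest(user_pos, user_dir, objects):
        """Stage 1: pick the object the user is most likely attending to."""
        def score(obj):
            dx = obj["pos"][0] - user_pos[0]
            dy = obj["pos"][1] - user_pos[1]
            dist = math.hypot(dx, dy)
            # Favor objects near the user's view direction and close by.
            angle = math.atan2(dy, dx) - math.atan2(user_dir[1], user_dir[0])
            return math.cos(angle) / (1.0 + dist)
        return max(objects, key=score)

    def select_agent_position(user_pos, obj_pos, candidates):
        """Stage 2: choose a standing spot given the chosen object."""
        def score(pos):
            d_user = math.dist(pos, user_pos)
            d_obj = math.dist(pos, obj_pos)
            # Placeholder social-distance term: stay ~1.5 m from the user,
            # while remaining near the object being introduced.
            return -abs(d_user - 1.5) - 0.5 * d_obj
        return max(candidates, key=score)

    def select_agent_orientation(agent_pos, user_pos, obj_pos):
        """Stage 3: face a point between the user and the object."""
        mid = ((user_pos[0] + obj_pos[0]) / 2, (user_pos[1] + obj_pos[1]) / 2)
        return math.atan2(mid[1] - agent_pos[1], mid[0] - agent_pos[0])

    # Example update step: user at the origin looking along +x.
    objects = [{"name": "statue", "pos": (2.0, 0.5)},
               {"name": "mural", "pos": (-1.0, 3.0)}]
    target = select_object_of_interest((0.0, 0.0), (1.0, 0.0), objects)
    spots = [(1.0, 1.0), (1.5, -0.5), (0.5, 2.0)]
    agent_pos = select_agent_position((0.0, 0.0), target["pos"], spots)
    agent_yaw = select_agent_orientation(agent_pos, (0.0, 0.0), target["pos"])

Running the stages in sequence, rather than jointly, keeps each step cheap enough to re-evaluate every frame, which is consistent with the real-time requirement stated in the abstract.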
