Abstract

Capturing and recording immersive VR sessions performed through HMDs in explorative virtual environments may offer valuable insights into users’ behavior, scene saliency and spatial affordances. Collected data can support effort prioritization in 3D modeling workflows or allow fine-tuning of locomotion models for time-constrained experiences. The web, with its recent specifications (WebVR/WebXR), represents a valid solution to enable accessible, interactive and usable tools for remote VR analysis of recorded sessions. Performing immersive analytics through common browsers, however, presents several challenges, including limited rendering capabilities. Furthermore, interactive inspection of large session records is often problematic due to network bandwidth constraints, or may involve computationally intensive encoding/decoding routines. This work proposes, formalizes and investigates flexible dynamic models to volumetrically capture user states and scene saliency during running VR sessions using compact approaches. We investigate image-based encoding techniques and layouts targeting interactive and immersive WebVR remote inspection. We performed several experiments to validate and assess the proposed encoding models, applied to existing records and within networked scenarios through direct server-side encoding, using limited storage and computational resources.
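The abstract's core idea of compactly encoding captured user states into images can be illustrated with a minimal sketch. The snippet below is a hypothetical example, not the paper's actual layout: per-frame head positions are quantized into 8-bit pixel values (one pixel per frame, one channel per axis) relative to a known scene bounding volume, so a whole session becomes a small image that a browser can fetch and decode cheaply. All function names and the specific quantization scheme are assumptions for illustration.

```python
import numpy as np

def encode_states(positions, bounds_min, bounds_max):
    """Quantize per-frame 3D positions into 8-bit pixels (one pixel per frame).

    positions:  (N, 3) float array of head positions per captured frame.
    bounds_*:   (3,) arrays delimiting the scene bounding volume (assumed known).
    Returns an (N, 3) uint8 array, i.e. one RGB pixel per frame.
    """
    span = bounds_max - bounds_min
    norm = (positions - bounds_min) / span            # map into [0, 1]
    return np.clip(np.round(norm * 255), 0, 255).astype(np.uint8)

def decode_states(pixels, bounds_min, bounds_max):
    """Invert encode_states; reconstruction error is bounded by span / 510 per axis."""
    span = bounds_max - bounds_min
    return pixels.astype(np.float32) / 255.0 * span + bounds_min
```

With this layout, an N-frame session costs 3 bytes per sample and can be shipped as a lossless PNG, trading sub-centimeter precision (for room-scale bounds) for very small, cache-friendly records; the paper's own models presumably use richer layouts that also cover orientation and saliency.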
