Abstract

We describe a multimodal dataset of paired head and eye movements acquired in controlled virtual reality environments. Our dataset includes head and eye movement data from n = 25 participants who interacted with four different virtual reality environments requiring coordinated head and eye behaviors. Our data collection involved two visual tracking tasks and two visual search tasks. Each participant performed each task three times, resulting in approximately 1080 seconds of paired head and eye movement and 129,611 data samples of paired head and eye rotations per participant (an effective sampling rate of roughly 120 Hz). This dataset enables research into predictive models of intended head movement conditioned on gaze for augmented and virtual reality experiences, as well as assistive devices such as powered exoskeletons for individuals with head-neck mobility limitations. It also supports biobehavioral and mechanistic studies of the variability in head and eye movement across participants and tasks. The virtual environment developed for this data collection is open source, allowing others to perform their own data collection and to modify the environment.
