Abstract

Measuring where people look in real-world tasks has never been easier, but analyzing the resulting data remains laborious. One solution integrates head-mounted eye tracking with motion capture, but no best practice exists regarding what calibration data to collect. Here, we compared four ~1 min calibration routines used to train linear regression gaze vector models and examined how the coordinate system, the eye data used, and the location of fixation changed gaze vector accuracy on three trial types: calibration, validation (static fixations to task-relevant locations), and task (naturally occurring fixations during object interaction). Impressively, predicted gaze vectors showed ~1 cm of error when participants looked straight ahead toward objects during natural arm's-length interaction. This accuracy was achieved by predicting fixations in a Spherical coordinate frame from the best monocular data and, surprisingly, depended little on the calibration routine.
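The core modeling step described above, a linear regression mapping monocular eye-tracker features to gaze direction in a spherical coordinate frame, can be sketched as follows. This is an illustrative example, not the authors' code: the pupil features, the synthetic calibration targets, and the linear map `true_W` are all assumptions made for the demonstration.

```python
# Illustrative sketch (not the paper's implementation): fit a linear regression
# that maps 2-D monocular pupil position to gaze direction expressed in
# spherical coordinates (azimuth, elevation), then convert the prediction to a
# unit gaze vector.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ~1 min calibration data: N samples of pupil position paired with
# the known fixation direction in spherical coordinates (radians).
N = 60
pupil_xy = rng.uniform(-1.0, 1.0, size=(N, 2))
true_W = np.array([[0.8, 0.1], [0.05, 0.9]])  # assumed eye-to-gaze linear map
true_b = np.array([0.02, -0.01])
az_el = pupil_xy @ true_W + true_b + rng.normal(0, 0.005, size=(N, 2))

# Least-squares fit of the linear gaze model: [az, el] ~ [x, y, 1] @ coeffs.
X = np.hstack([pupil_xy, np.ones((N, 1))])
coeffs, *_ = np.linalg.lstsq(X, az_el, rcond=None)

# Predict gaze for a new pupil sample and convert spherical angles to a
# unit gaze vector in head-fixed Cartesian coordinates.
az, el = np.array([0.3, -0.2, 1.0]) @ coeffs
gaze = np.array([np.cos(el) * np.sin(az),
                 np.sin(el),
                 np.cos(el) * np.cos(az)])
```

In a real pipeline the calibration targets would come from motion-captured fixation locations rather than a synthetic linear map, and separate validation and task trials would be used to evaluate the fitted model's accuracy.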
