Abstract

We propose HandyGaze, a 6-DoF gaze tracking technique for room-scale environments that requires only a naturally held smartphone, with no sensors or markers installed in the environment. Our technique employs the smartphone's front and rear cameras simultaneously: the front camera estimates the user's gaze vector relative to the smartphone, while the rear camera (and depth sensor, if available) performs self-localization against a pre-obtained 3D map of the environment. To achieve this, we implemented a prototype that runs on iOS smartphones, using an ARKit-based algorithm to estimate the user's 6-DoF head orientation. We additionally implemented a novel calibration method that offsets the user-specific deviation between the head and gaze orientations. We then conducted a user study (N=10) that measured our technique's positional accuracy with respect to the gaze target under four conditions: with and without the depth sensor, crossed with and without calibration. The results show that our calibration method reduced the mean absolute error of the gaze point by 27%, yielding an error of 0.53 m when the depth sensor was used. We also report the target size required to avoid erroneous inputs. Finally, we suggest possible applications, such as a gaze-based guidance application for museums.
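The dual-camera pipeline described above maps naturally onto ARKit's simultaneous world and face tracking. Below is a minimal Swift sketch of such a loop, not the authors' implementation: it assumes iOS 13+ on a device that supports `ARWorldTrackingConfiguration.userFaceTrackingEnabled`, and names such as `GazeTracker`, `savedWorldMap`, and `calibrationOffset` are illustrative.

```swift
import ARKit
import simd

// Sketch: rear-camera self-localization against a saved 3D map plus
// front-camera head tracking, with a per-user calibration offset.
final class GazeTracker: NSObject, ARSessionDelegate {
    let session = ARSession()

    // Per-user rotation offsetting the deviation between head and gaze
    // orientation; identity until a calibration routine sets it (assumed).
    var calibrationOffset = simd_quatf(angle: 0, axis: SIMD3<Float>(0, 1, 0))

    func start(with savedWorldMap: ARWorldMap) {
        let config = ARWorldTrackingConfiguration()
        // Relocalize the rear camera within the pre-obtained 3D map.
        config.initialWorldMap = savedWorldMap
        // Track the user's face with the front camera at the same time.
        if ARWorldTrackingConfiguration.supportsUserFaceTracking {
            config.userFaceTrackingEnabled = true
        }
        // Use the depth sensor (LiDAR) for scene meshing when present.
        if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
            config.sceneReconstruction = .mesh
        }
        session.delegate = self
        session.run(config, options: [.resetTracking, .removeExistingAnchors])
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let face as ARFaceAnchor in anchors {
            // With userFaceTrackingEnabled, the face anchor's transform is
            // expressed in the world (room) coordinate space.
            let t = face.transform
            let headPosition = SIMD3<Float>(t.columns.3.x, t.columns.3.y, t.columns.3.z)
            // The face anchor's +Z axis points outward from the face.
            let headForward = SIMD3<Float>(t.columns.2.x, t.columns.2.y, t.columns.2.z)
            // Rotate the head orientation by the calibration offset to
            // approximate the actual gaze direction.
            let gazeDirection = simd_normalize(calibrationOffset.act(headForward))
            // Intersecting the ray (headPosition, gazeDirection) with the
            // environment model yields the gaze target (omitted here).
            _ = (headPosition, gazeDirection)
        }
    }
}
```

In this sketch, calibration reduces to a single rotation applied to the head's forward axis; a calibration routine in the spirit of the paper would estimate that rotation by having the user fixate a known target and comparing the head-forward ray against the true target direction.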
