Abstract

This paper presents a new, publicly available dataset intended as a benchmark for Point of Gaze (PoG) detection algorithms. The dataset consists of two modalities that can be combined to determine the PoG: (a) a set of videos recording the eye motion of human participants as they looked at, or followed, predefined points of interest on a computer display, and (b) a sequence of 3D head poses synchronized with the videos. Eye motion was recorded with a Mobile Eye-XG head-mounted infrared monocular camera, and head position was captured with a set of Vicon motion-capture cameras. Ground truth for the point of gaze and for the head's location and orientation in three-dimensional space is provided with the data; the point-of-gaze ground truth is known in advance because the participants always looked at predefined targets on the monitor.
