Abstract

Analyzing the gaze accuracy characteristics of an eye tracker is a critical task, as its gaze data are frequently affected by non-ideal operating conditions in various consumer eye tracking applications. Previous research on pattern analysis of gaze data has focused on modeling human visual behaviors and cognitive processes. What remains relatively unexplored are questions related to identifying gaze error sources as well as quantifying and modeling their impacts on the data quality of eye trackers. In this study, gaze error patterns produced by a commercial eye tracking device were studied with the help of machine learning algorithms, such as classifiers and regression models. Gaze data were collected from a group of participants under multiple conditions that commonly affect eye trackers operating on desktop and handheld platforms. These conditions (referred to here as error sources) include user distance, head pose, and eye-tracker pose variations, and the collected gaze data were used to train the classification and regression models. While the impacts of the different error sources on gaze data characteristics were nearly impossible to distinguish by visual inspection or from data statistics, the machine learning models succeeded in identifying the impact of the different error sources and predicting the variability in gaze error levels due to these conditions. The objective of this study was to investigate the efficacy of machine learning methods for the detection and prediction of gaze error patterns, which would enable an in-depth understanding of the data quality and reliability of eye trackers under unconstrained operating conditions. Coding resources for all the machine learning methods adopted in this study are included in an open repository named MLGaze to allow researchers to replicate the principles presented here using data from their own eye trackers.
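
The following is a minimal sketch (in Python with scikit-learn, not the actual MLGaze implementation) of the classification idea described above: windows of angular gaze error are summarized by simple statistics, and a classifier is trained to identify which operating condition produced them. The feature set and the synthetic data are illustrative assumptions; real gaze error measurements from an eye tracker would replace them, and an analogous regression model could be fitted to predict error magnitudes.

```python
# Sketch: classify the error source behind windows of gaze error samples.
# Labels, feature choices, and synthetic data are assumptions for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

def extract_features(gaze_errors):
    """Summarize a window of angular gaze errors (degrees) with simple statistics."""
    q1, q3 = np.percentile(gaze_errors, [25, 75])
    return [np.mean(gaze_errors), np.std(gaze_errors), q3 - q1, np.max(gaze_errors)]

# Synthetic stand-in for labelled gaze error windows:
# 0 = baseline, 1 = head pose variation, 2 = user-distance variation.
X, y = [], []
for label, (mu, sigma) in enumerate([(0.5, 0.1), (1.5, 0.4), (1.0, 0.3)]):
    for _ in range(200):
        window = rng.normal(mu, sigma, size=50)
        X.append(extract_features(window))
        y.append(label)

X_train, X_test, y_train, y_test = train_test_split(
    np.array(X), np.array(y), random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```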

Highlights

  • The results demonstrate that machine learning (ML) models can distinguish between gaze data collected under normal and varying operating conditions

  • Gaze datasets were constructed such that they contained signatures of a single or multiple error sources

  • It was found through the decision tree-based feature selection technique that statistical attributes, such as gaze error confidence levels and interquartile ranges, are significant parameters that can be used to distinguish gaze error sources
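
Below is a hedged illustration of the decision-tree-based feature selection mentioned in the last highlight: statistical attributes of gaze error windows (here mean, interquartile range, confidence-interval width, and skewness, chosen as plausible stand-ins rather than the study's actual feature set) are ranked by their importance in a fitted decision tree.

```python
# Sketch: rank statistical attributes of gaze error windows by decision-tree importance.
# Attribute names and synthetic data are assumptions, not the study's feature set.
import numpy as np
from scipy import stats
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)

def statistical_attributes(errors):
    q1, q3 = np.percentile(errors, [25, 75])
    ci_low, ci_high = stats.t.interval(0.95, len(errors) - 1,
                                       loc=np.mean(errors), scale=stats.sem(errors))
    return {"mean": np.mean(errors),
            "iqr": q3 - q1,
            "ci_width": ci_high - ci_low,
            "skew": stats.skew(errors)}

# Two hypothetical conditions: baseline vs. eye-tracker pose variation.
rows, labels = [], []
for label, (mu, sigma) in enumerate([(0.4, 0.1), (1.2, 0.5)]):
    for _ in range(300):
        rows.append(statistical_attributes(rng.normal(mu, sigma, size=40)))
        labels.append(label)

names = list(rows[0].keys())
X = np.array([[row[n] for n in names] for row in rows])
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, labels)
for name, importance in sorted(zip(names, tree.feature_importances_),
                               key=lambda t: -t[1]):
    print(f"{name}: {importance:.3f}")
```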

Introduction

Gaze data obtained from eye trackers operating on various consumer platforms are frequently affected by a multitude of factors (or error sources), such as head pose, user distance, display properties of the setup, illumination variations, and occlusions. The impact of these factors on gaze data is manifested in the form of gaze estimation errors whose characteristics or distributions have not been explored adequately in contemporary gaze research [1]. Researchers typically attempt to improve the accuracy of eye trackers or their calibration methods, while gaze error patterns are rarely analyzed, and many questions remain regarding the nature of gaze estimation errors. It is not known whether the above error sources produce any particular pattern of errors, whether the errors follow any statistical distribution, or whether they are random. Gaze patterns have been used with clustering and classification algorithms to predict a user's intention to perform a set of tasks in [6].
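
As a minimal, assumption-laden illustration of how the distribution question raised above might be probed, the sketch below fits a few candidate distributions to angular gaze errors (synthetic placeholders here) and compares goodness of fit with a Kolmogorov-Smirnov test.

```python
# Sketch: test whether gaze errors are consistent with candidate distributions.
# The gamma-distributed samples below are placeholders for measured errors (degrees).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
errors = rng.gamma(shape=2.0, scale=0.3, size=500)

for name, dist in [("norm", stats.norm), ("gamma", stats.gamma), ("lognorm", stats.lognorm)]:
    params = dist.fit(errors)
    ks_stat, p_value = stats.kstest(errors, name, args=params)
    print(f"{name}: KS statistic={ks_stat:.3f}, p={p_value:.3f}")
```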
