Abstract

State-of-the-art predictive user models, such as Goals, Operators, Methods, and Selection Rules (GOMS) and the Keystroke-Level Model (KLM), do not account for human differences in information processing, nor do they model user interaction by considering users' visual behavior patterns during task execution. This gap can be attributed mainly to the lack of methods and approaches for modeling such interdependencies, i.e., for correlating users' cognitive characteristics with their interaction and visual behavior during task execution, and ultimately for applying such cognitive-centered user models within current state-of-the-art personalization and adaptation approaches of information systems. In this paper, we elaborate on such an endeavor and propose a seven-step, gaze-based, human cognitive-centered user model as a basis for synthesizing users' cognitive styles with their interaction and visual behavior patterns. To validate the suggested user model, we applied it in an eye tracking study that investigated the influence of users' cognitive differences on their visual behavior during user authentication in traditional desktop and mixed reality contexts. Initial results of the study are also reported.

CCS CONCEPTS: • Human-centered computing ~ HCI theory, concepts and models ~ Empirical studies in HCI.
