Abstract

Alfred Yarbus (1967) provided arguably the first evidence that task instructions and other transient shifts in observers' cognitive state produce corresponding shifts in eye movement patterns. Subsequent research showed that similar shifts in eye movement behavior could also result from developmental disorders or brain injury (Karatekin, 2007). Researchers in a new and growing interdisciplinary field, "Eye Movement Biometrics", have capitalized on these insights by combining them with machine learning techniques. Their goal is to exploit the relationship Yarbus identified between cognition and eye movements in order to 'decode' observer cognition from eye movement behavior. This would allow eye movements to serve as fast, efficient 'windows to the soul', facilitating both user intention recognition in human-machine interfaces and non-invasive cognitive health status monitoring in medical diagnostic systems. Eye movement biometric analyses have recovered cognitive state information from observer eye movement behavior in a number of studies (Tseng et al., 2013; Boisvert & Bruce, 2016), but there has also been a reported failure to decode task from eye movements (Greene, Liu, & Wolfe, 2012). We argue that these mixed results stem from an inaccurate model of the relationships Yarbus identified. Research suggests that eye movement behavior results from a set of complex relationships between observers, environments, memory, and the predictive activity of the brain, enacted over many simultaneous time-scales (Aks, 2009; Aks, Zelinsky, & Sprott, 2002; Anderson, Bischof, Laidlaw, Risko, & Kingstone, 2013; Richardson & Dale, 2005). The correct eye movement biometric modeling approach is therefore not classification of individual eye movement records into discrete categories using static features, but trajectory prediction within a cognitive-behavioral phase-space via regression. Here we describe a novel method that makes this conceptual and technical transition possible. The most important difference between our approach and existing eye movement biometric techniques is that we model cognition-mediated relationships between the visual system and the environment as points within a low-dimensional embedding space learned by a generative neural network. Because the model contains meaningful representations of all possible data points, sequential data records can be expressed as points along predictable dynamic trajectories. We describe the changes to the analytical pipeline required to create such a model, and demonstrate its arguably most desirable feature: it preserves the ability to accurately 'decode' the same set of discrete targets of interest as other eye movement biometric projects (task and neuropsychological status) while using fewer features. However, we also report several decoding failures for some of our classification targets, and we therefore discuss the unique set of tools the method provides for contextualizing or even overcoming such failures. The first of these is its ability to convert sequences of trial-level embeddings within subjects into single points within a higher-level temporal embedding space using the same analytical pipeline, allowing us to 'zoom' into or out of different time-scales within the data. We show that by moving from trial-level to subject-level record embeddings, we can significantly increase classification accuracy for neuropsychological status.
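The abstract leaves the model architecture unspecified; as a rough illustration of the pipeline it describes, the following minimal Python/PyTorch sketch (ours, not the authors' implementation) embeds fixed-length eye movement records as points in a low-dimensional latent space using an autoencoder-style generative model, then 're-embeds' each subject's sequence of trial-level points with the same pipeline to move up a time-scale. All names and dimensions (GazeAutoencoder, RECORD_DIM, LATENT_DIM, N_TRIALS) are hypothetical stand-ins.

```python
# Minimal sketch, assuming an autoencoder-style generative model; the paper
# does not commit to this specific architecture.
import torch
import torch.nn as nn

RECORD_DIM = 120   # hypothetical: length of a flattened, fixed-size gaze record per trial
LATENT_DIM = 2     # dimensionality of the low-dimensional embedding space

class GazeAutoencoder(nn.Module):
    def __init__(self, record_dim=RECORD_DIM, latent_dim=LATENT_DIM):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(record_dim, 64), nn.ReLU(),
            nn.Linear(64, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, record_dim),
        )

    def forward(self, x):
        z = self.encoder(x)           # each record becomes a point in the embedding space
        return z, self.decoder(z)     # embedding and generative reconstruction

# Toy training loop; random tensors stand in for real eye movement records.
model = GazeAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
records = torch.randn(256, RECORD_DIM)          # 256 placeholder trial-level records
for _ in range(100):
    z, reconstruction = model(records)
    loss = nn.functional.mse_loss(reconstruction, records)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# 'Zooming' out a time-scale: treat each subject's sequence of trial-level
# embeddings as one higher-level record and re-embed it with the same
# pipeline, yielding a single subject-level point per subject.
N_TRIALS = 40
with torch.no_grad():
    trial_z = model.encoder(torch.randn(8 * N_TRIALS, RECORD_DIM))   # 8 subjects x 40 trials
subject_records = trial_z.reshape(8, N_TRIALS * LATENT_DIM)          # flatten per subject
subject_model = GazeAutoencoder(record_dim=N_TRIALS * LATENT_DIM)
subject_z = subject_model.encoder(subject_records)                   # subject-level embeddings
```

After training, each trial is a point z in the latent space, and a subject's sequence of trials forms a trajectory that can be modeled by regression rather than one-shot classification.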
The second is its ability to represent task or disease influences on behavior not as transitions between discrete states, but as shifts toward or away from regions of the generative model's space associated with different tasks or neuropsychological conditions. Each point in these trajectories is defined in terms of a set or mixture of task- or disease-class probabilities, its 'taskiness' or 'diseasiness' values, reflecting the strength of the 'pull' of a given attractor on behavior at that time. We show that it is possible to accurately predict 'taskiness' trajectories using very little data, even where discrete task classification performance was weak. Further, we provide evidence that experimental design choices systematically influence the properties of 'taskiness' trajectories. This implies a previously unobserved but straightforwardly manipulable relationship between the design and the results of eye movement biometric experiments. Together, these results suggest that our approach has the potential to radically transform eye movement biometric research and to significantly extend its basic-scientific and applied utility.
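Again purely as an illustration: if a classifier over the embedding space supplies per-time-step task-class probabilities, a 'taskiness' trajectory can be forecast by simple regression. The classifier, dimensions, and least-squares one-step forecaster below are hypothetical stand-ins, not the paper's method.

```python
# Minimal sketch of the 'taskiness' idea under stated assumptions.
import torch
import torch.nn as nn

LATENT_DIM, N_TASKS = 2, 4          # hypothetical embedding size and task count

# A classifier over the embedding space (assumed already trained on labeled
# trials) turns each latent point into a mixture of task-class probabilities.
classifier = nn.Linear(LATENT_DIM, N_TASKS)

def taskiness(z):
    """Task-class probability mixture ('taskiness') for latent points z, shape (T, LATENT_DIM)."""
    return torch.softmax(classifier(z), dim=-1)     # (T, N_TASKS); rows sum to 1

# Toy latent trajectory standing in for one subject's sequence of trials.
trajectory = torch.randn(50, LATENT_DIM)
p = taskiness(trajectory).detach()                  # (50, N_TASKS) 'taskiness' time series

# Predict the trajectory by regression: a least-squares one-step-ahead map
# from the current mixture to the next one.
W = torch.linalg.lstsq(p[:-1], p[1:]).solution      # (N_TASKS, N_TASKS) transition map
predicted_next = p[:-1] @ W                         # forecasted 'taskiness' at t+1
error = nn.functional.mse_loss(predicted_next, p[1:])
```

On this reading, an 'attractor' for a task would show up as a region of the latent space whose points the forecaster repeatedly pulls the mixture toward, which is why trajectory prediction can succeed even when discrete classification of individual records fails.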
