Decoding eye movements from non-invasive electroencephalography (EEG) data is a challenging yet important task for both scientific and practical purposes, particularly for identifying neurodegenerative disorders such as Alzheimer's disease (AD). We address this challenge by adapting inverse reinforcement learning (IRL), a machine learning method that infers decision-making strategies from observed behavior, to model the processes driving eye movements during diverse cognitive tasks. The paper first describes the procedures for collecting and preprocessing EEG data related to gaze behavior. We then present an IRL framework designed to predict the spatial and temporal dynamics of eye movements (scanpaths) of participants engaged in cognitive tasks of varying complexity; the model is built to accommodate the complexities inherent in neural signals and the stochastic nature of human gaze patterns. Our findings show that IRL accurately forecasts gaze patterns from a combination of EEG and image data, and the correlation between the model's predictions and the gaze behavior observed in controlled experiments supports the utility of IRL in cognitive neuroscience research. Notably, our IRL-EEG models performed particularly well on the more complex cognitive tasks. We conclude by discussing the implications of these results for understanding the neural mechanisms that govern gaze behavior.
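The abstract does not specify the authors' IRL formulation, so as an illustration only, the following is a minimal sketch of the general idea of inferring a reward function from observed gaze choices. It uses a toy single-step, maximum-entropy-style model in which candidate fixation targets have hypothetical feature vectors (e.g., saliency-like features) and the reward weights are recovered by matching expert and model feature expectations; all names, dimensions, and features here are invented for the example and are not from the paper.

```python
import numpy as np

# Toy single-step maximum-entropy IRL for gaze-target choice.
# States are candidate fixation locations; phi maps each location to a
# hypothetical feature vector. The "expert" chooses targets with
# probability proportional to exp(reward), and we recover the reward
# weights theta by gradient ascent on the log-likelihood, whose gradient
# is the gap between expert and model feature expectations.
rng = np.random.default_rng(0)
n_targets, n_features = 6, 3
phi = rng.normal(size=(n_targets, n_features))   # feature matrix (assumed)

true_theta = np.array([1.5, -1.0, 0.5])          # hidden reward weights
p_true = np.exp(phi @ true_theta)
p_true /= p_true.sum()
demos = rng.choice(n_targets, size=5000, p=p_true)  # simulated fixations

mu_expert = phi[demos].mean(axis=0)              # empirical feature mean

theta = np.zeros(n_features)
for _ in range(2000):
    p = np.exp(phi @ theta)
    p /= p.sum()
    grad = mu_expert - p @ phi                   # expert minus model features
    theta += 0.1 * grad                          # gradient ascent step

p_model = np.exp(phi @ theta)
p_model /= p_model.sum()
```

After fitting, `p_model` closely matches the expert's choice distribution `p_true`; a sequential scanpath model, as the abstract describes, would extend this single-step idea with state transitions and EEG-derived features.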