Abstract

Background

Eye-tracking technology is an innovative tool that holds promise for enhanced dementia screening, offering the potential for brief, quantitative assessment of cognitive functions. Critically, instruction-less eye-tracking tests may ameliorate some of the issues with complex test instructions and linguistic variation associated with traditional cognitive tests, and capture additional sensitive metrics of task performance. However, extracting relevant biomarkers from large, complex eye-tracking datasets is non-trivial. In this work, we introduce a novel automated way of extracting abnormal oculomotor biomarkers using machine learning from raw eye-tracking data acquired during an instruction-less cognitive test.

Method

A free-viewing, instruction-less cognitive battery (5 minutes) was administered to healthy controls (N = 553) and patients with a range of dementias (N = 30) [Figure 1]. Our method is based on self-supervised representation learning: a deep neural network is initially trained to solve a pretext task with well-defined, readily available labels. Here the pretext task is to identify distinct tasks - scene perception, reading, episodic memory for scenes - in healthy individuals from their eye-tracking patterns. Figure 2 visualises features of eye-tracking patterns that correspond to particular tasks. Once trained, this network encodes high-level semantic information that is useful for solving other problems of interest (e.g. dementia classification) [Figure 3]. The extent to which eye-tracking features of patients with dementia deviate from healthy behaviour is then explored, followed by a comparison between self-supervised and handcrafted representations for discriminating between controls and patients.

Result

Based on the handcrafted features, patients with dementia had significantly lower scanpath lengths than controls (z = -276.56, SE = 97.09, p = 0.00439), consistent with less extensive and efficient scanning of the presented stimuli. The self-supervised learning features showed higher performance than standard handcrafted features in discriminating dementia patients from controls (F1 score 95% CI: [0.78, 0.82] vs [0.62, 0.67]).

Conclusion

These results suggest that instruction-less eye-tracking tests can detect dementia status, even in the absence of explicit task instructions. We reveal novel self-supervised learning features that are more sensitive than handcrafted features in detecting performance differences between participants with and without dementia across a variety of eye-tracking-based cognitive tasks.
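To make the pretext-task setup concrete, the sketch below shows one possible realisation in PyTorch: an encoder over raw (x, y) gaze sequences is trained to predict which cognitive task produced a recording, and its learned embeddings are then reused as features for the downstream control-versus-patient comparison. The encoder architecture, sequence format, and hyperparameters here are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a self-supervised pretext task over gaze data, assuming
# fixed-length sequences of (x, y) gaze coordinates. All names and sizes are
# illustrative choices, not the published model.
import torch
import torch.nn as nn

class GazeEncoder(nn.Module):
    """GRU encoder mapping a gaze sequence to a fixed-length embedding."""
    def __init__(self, embed_dim: int = 64):
        super().__init__()
        self.gru = nn.GRU(input_size=2, hidden_size=embed_dim, batch_first=True)

    def forward(self, gaze):                  # gaze: (batch, time, 2)
        _, h = self.gru(gaze)                 # h: (1, batch, embed_dim)
        return h.squeeze(0)                   # (batch, embed_dim)

class PretextClassifier(nn.Module):
    """Predicts which task (scene perception, reading, episodic memory) produced the gaze."""
    def __init__(self, encoder: GazeEncoder, embed_dim: int = 64, n_tasks: int = 3):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Linear(embed_dim, n_tasks)

    def forward(self, gaze):
        return self.head(self.encoder(gaze))

# Pretext training on healthy controls only (synthetic stand-in data shown here).
encoder = GazeEncoder()
model = PretextClassifier(encoder)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

gaze_batch = torch.randn(32, 500, 2)          # 32 recordings, 500 gaze samples each
task_labels = torch.randint(0, 3, (32,))      # which cognitive task was presented

for _ in range(10):                           # a few illustrative optimisation steps
    optimiser.zero_grad()
    loss = loss_fn(model(gaze_batch), task_labels)
    loss.backward()
    optimiser.step()

# After pretext training, the frozen encoder's embeddings serve as
# self-supervised features for a separate control-vs-dementia classifier.
with torch.no_grad():
    features = encoder(gaze_batch)            # (32, 64) feature vectors
```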
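The handcrafted scanpath-length feature reported above can be computed as the summed Euclidean distance between consecutive fixation positions; a minimal sketch, assuming fixations are supplied as an (N, 2) array of screen coordinates, follows.

```python
# Hedged sketch of the handcrafted scanpath-length feature: the total Euclidean
# distance travelled across consecutive fixations. The input format assumed here
# (an (N, 2) array of x/y coordinates) is an illustrative choice.
import numpy as np

def scanpath_length(fixations: np.ndarray) -> float:
    """Summed Euclidean distance between consecutive fixation positions."""
    steps = np.diff(fixations, axis=0)               # (N-1, 2) displacement vectors
    return float(np.linalg.norm(steps, axis=1).sum())

# Example: a short scanpath in screen-pixel coordinates.
fix = np.array([[100.0, 200.0], [400.0, 200.0], [400.0, 600.0]])
print(scanpath_length(fix))                          # 300 + 400 = 700.0
```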
