Abstract

Background
Clinical trials investigating novel disease-modifying therapies for Alzheimer's disease (AD) are increasingly targeting participants with preclinical or early-stage neurodegeneration. Artificial intelligence may improve ascertainment of these individuals and thus enrich clinical trial cohorts. We examined the classification accuracy of machine learning analysis of speech and gaze data in distinguishing memory clinic patients from controls.

Method
We recruited individuals with a clinical diagnosis of AD, mild cognitive impairment (MCI), or subjective memory complaints (SMC) from a subspecialty memory clinic, and controls from the community. Clinical diagnosis was ascertained by trained behavioural neurologists aided by cognitive tests and neuroimaging. Participants read a paragraph from the International Reading Speed Texts (IReST). Speech was recorded, automatically transcribed, and manually verified. Eye movements were recorded with an infrared eye-tracker. Extracted features included lexical and acoustic parameters for speech, fixation- and saccade-related features from gaze, and novel multimodal features leveraging speech and gaze signals in combination. We explored predictive models using logistic regression, Gaussian Naïve Bayes classifiers, and Random Forests.

Result
Here we report baseline IReST task data from 60 clinic patients (12% SMC, 30% MCI, 58% AD; mean age 73 ± 9; 52% female) and 66 controls (mean age 65 ± 9; 68% female). The best speech-based models distinguished patients from controls with an area under the ROC curve (AUC) of 0.75 (95% CI 0.72-0.78). The best gaze-based models yielded an AUC of 0.73 (0.71-0.75). Models integrating speech and gaze data yielded the best results, with an AUC of 0.78 (0.76-0.80). We have previously reported data from a separate task in which participants described the Cookie Theft picture from the Boston Diagnostic Aphasia Examination while undergoing infrared eye-tracking; the best model combining speech and gaze achieved an AUC of 0.80 (0.78-0.92).

Conclusion
Machine learning-based analysis of speech and gaze data demonstrates promising classification accuracy in distinguishing memory clinic patients from healthy controls, particularly when speech and gaze data are used in combination. We will explore the additional classification accuracy gained by combining data from multiple tasks.
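
To make the modelling workflow in the Method and Result sections concrete, the sketch below shows one plausible implementation: concatenated speech and gaze features are passed to the three classifier families named above, and each model's AUC is reported with a percentile-bootstrap 95% CI. The use of scikit-learn, the synthetic placeholder features, and the bootstrap procedure are illustrative assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch of the patient-vs-control classification pipeline.
# Feature values here are random placeholders standing in for the lexical,
# acoustic, gaze, and multimodal features described in the abstract.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Placeholder data: rows are participants, columns are extracted features;
# y is 1 for clinic patients (n = 60) and 0 for controls (n = 66).
n_patients, n_controls, n_features = 60, 66, 40
X = rng.normal(size=(n_patients + n_controls, n_features))
y = np.concatenate([np.ones(n_patients, dtype=int), np.zeros(n_controls, dtype=int)])

models = {
    "logistic_regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "gaussian_naive_bayes": make_pipeline(StandardScaler(), GaussianNB()),
    "random_forest": RandomForestClassifier(n_estimators=500, random_state=0),
}

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

def bootstrap_auc_ci(y_true, y_score, n_boot=2000, alpha=0.05):
    """Percentile-bootstrap CI for the AUC (one plausible way to obtain a 95% CI)."""
    aucs = []
    idx = np.arange(len(y_true))
    for _ in range(n_boot):
        sample = rng.choice(idx, size=len(idx), replace=True)
        if len(np.unique(y_true[sample])) < 2:  # resample must contain both classes
            continue
        aucs.append(roc_auc_score(y_true[sample], y_score[sample]))
    return np.percentile(aucs, [100 * alpha / 2, 100 * (1 - alpha / 2)])

for name, model in models.items():
    # Out-of-fold predicted probabilities for the patient class
    scores = cross_val_predict(model, X, y, cv=cv, method="predict_proba")[:, 1]
    auc = roc_auc_score(y, scores)
    lo, hi = bootstrap_auc_ci(y, scores)
    print(f"{name}: AUC {auc:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

Using cross-validated out-of-fold probabilities keeps every AUC estimate based on data the model did not see during fitting; the same scaffolding would accept speech-only, gaze-only, or combined feature sets to reproduce the three comparisons reported above.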
