Abstract

Advances in natural language processing (NLP), speech recognition, and machine learning (ML) allow the exploration of linguistic and acoustic changes that were previously difficult to measure. We developed processes for deriving lexical-semantic and acoustic measures as Alzheimer's disease (AD) digital voice biomarkers. We collected connected speech, neuropsychological, neuroimaging, and cerebrospinal fluid (CSF) AD biomarker data from 92 cognitively unimpaired (40 Aβ+) and 114 impaired (63 Aβ+) participants. Acoustic and lexical-semantic features were derived from audio recordings using ML approaches. Lexical-semantic (area under the curve [AUC] = 0.80) and acoustic (AUC = 0.77) scores demonstrated higher diagnostic performance for detecting mild cognitive impairment (MCI) than the Boston Naming Test (AUC = 0.66). Only lexical-semantic scores detected amyloid-β status (p = 0.0003). Acoustic scores were associated with hippocampal volume (p = 0.017), whereas lexical-semantic scores were associated with CSF amyloid-β (p = 0.007). Both measures were significantly associated with 2-year disease progression. These preliminary findings suggest that the derived digital biomarkers may identify cognitive impairment in preclinical and prodromal AD and may predict disease progression.

Highlights

This study derived lexical-semantic and acoustic features as Alzheimer's disease (AD) digital biomarkers.
These features were derived from audio recordings using machine learning approaches.
Voice biomarkers detected cognitive impairment and amyloid-β status in early stages of AD.
Voice biomarkers may predict Alzheimer's disease progression.
These markers significantly mapped to functional connectivity in AD-susceptible brain regions.
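As an illustrative sketch only (not the authors' implementation), the diagnostic-performance comparison reported above amounts to computing the area under the ROC curve for each derived score against the impairment label. The snippet below uses hypothetical, randomly generated per-participant scores and labels, and scikit-learn's roc_auc_score, to show what such a comparison looks like in practice.

```python
# Illustrative sketch: how AUC comparisons like those reported above
# (lexical-semantic vs. acoustic vs. Boston Naming Test scores) can be computed.
# The arrays below are hypothetical placeholders, not the study's data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_participants = 206  # 92 unimpaired + 114 impaired, as in the cohort described above

# Hypothetical binary labels: 1 = cognitively impaired, 0 = cognitively unimpaired.
labels = rng.integers(0, 2, size=n_participants)

# Hypothetical composite scores for each measure (higher = more impairment).
scores = {
    "lexical_semantic": labels + rng.normal(0, 0.9, size=n_participants),
    "acoustic":         labels + rng.normal(0, 1.1, size=n_participants),
    "boston_naming":    labels + rng.normal(0, 1.8, size=n_participants),
}

# AUC quantifies how well each score separates impaired from unimpaired participants.
for name, score in scores.items():
    print(f"{name}: AUC = {roc_auc_score(labels, score):.2f}")
```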
