Abstract

Background

Recent studies show promising results using speech features to automatically distinguish patients with amnestic Alzheimer's disease (AD) from healthy controls (HC). However, these studies exclude atypical presentations of AD, which involve linguistic and/or behavioral impairments that clinically overlap with frontotemporal dementia (FTD). A clinically meaningful challenge is distinguishing underlying AD pathology from frontotemporal lobar degeneration (FTLD) pathology in people presenting with an FTD phenotype. We used digital speech features to automatically classify patients with clinical FTD as having underlying AD or FTLD pathology.

Method

We extracted 90 lexical and 9 acoustic features from one-minute picture descriptions produced by 179 participants using our previously validated automated pipelines (Table 1). All patients presented clinically with FTD and had AD or FTLD pathology confirmed by either autopsy or CSF (AD: CSF pTau/Aβ42 ≥ 0.1 and tTau/Aβ42 ≥ 0.34). Extracted features included part-of-speech counts, unique words per 100 words, several lexical characteristics, and acoustic/durational measures. Support Vector Machine (SVM), Random Forest (RF), and Multilayer Perceptron (MLP) classifiers were trained for two tasks: 1) AD vs. FTLD pathology and 2) all patients vs. HC. Five feature sets were tested: Set 1 (baseline): demographics + MMSE; Set 2: speech only; Set 3: speech + demographics; Set 4: speech + MMSE; Set 5: speech + demographics + MMSE. Features were selected using elastic net regression with 10-fold cross-validation; the best-performing models are reported after hyperparameter tuning.

Result

In distinguishing AD from FTLD, the MLP model trained with speech had the highest AUC (0.88; accuracy = 83%) (Table 2). The RF model trained with speech and MMSE showed similar performance (accuracy = 84%, AUC = 0.85). Verb counts had the highest feature importance (FI) values in the MLP model trained with speech, followed by noun length.
In distinguishing all patients from HC, the MLP model trained with speech only had the highest accuracy (93%; AUC = 0.93) (Table 3). The MLP model trained with speech and MMSE showed a comparable result (accuracy = 92%, AUC = 0.94). Unique adjective counts had the highest FI in the MLP model trained with speech, followed by word length.

Conclusion

Speech-only models outperformed baseline models and performed comparably to models trained with speech + MMSE. Features extracted from brief speech samples are valuable for automatically distinguishing AD from FTLD pathology, and FTD from HC.
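The modeling pipeline described above (elastic-net feature selection followed by a classifier such as an MLP, evaluated with 10-fold cross-validated AUC) can be sketched with scikit-learn. This is a minimal illustration, not the study's implementation: the data below is a synthetic placeholder standing in for the 99 speech features (90 lexical + 9 acoustic) from 179 participants, and all hyperparameters (l1_ratio, regularization strength, hidden layer size) are assumed values rather than the tuned settings reported in the abstract.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic placeholder: 179 "participants" x 99 "speech features".
# The real study extracted lexical/acoustic features from picture descriptions.
X, y = make_classification(n_samples=179, n_features=99,
                           n_informative=15, random_state=0)

# Elastic-net logistic regression as the embedded feature selector
# (the saga solver supports the elastic-net penalty); l1_ratio and C
# are assumed, illustrative hyperparameters.
selector = SelectFromModel(
    LogisticRegression(penalty="elasticnet", solver="saga",
                       l1_ratio=0.5, C=1.0, max_iter=5000))

# Scale -> select features -> classify with a small MLP.
clf = make_pipeline(
    StandardScaler(),
    selector,
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0))

# 10-fold cross-validated AUC, mirroring the abstract's evaluation setup.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
auc = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc").mean()
print(f"mean cross-validated AUC: {auc:.2f}")
```

Swapping the `MLPClassifier` step for `SVC(probability=True)` or `RandomForestClassifier` reproduces the other two model families compared in the study; feature-importance values like those reported for verb counts would come from inspecting the fitted model, not from this sketch.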
