Abstract

Introduction: Alzheimer's disease (AD) is the most prevalent type of dementia and causes abnormal cognitive function and progressive loss of essential life skills. Early screening is thus necessary for the prevention of and intervention in AD. Speech dysfunction is an early-onset symptom in AD patients, and recent studies have demonstrated the promise of automated assessment using acoustic or linguistic features extracted from speech. However, most previous studies have relied on manual transcription to extract linguistic features, which weakens the efficiency of automated assessment. The present study therefore investigates the effectiveness of automatic speech recognition (ASR) in building an end-to-end automated speech analysis model for AD detection.

Methods: We implemented three publicly available ASR engines and compared the resulting classification performance on the ADReSS-IS2020 dataset. In addition, the SHapley Additive exPlanations (SHAP) algorithm was used to identify the features that contributed most to model performance.

Results: The three automatic transcription tools produced transcripts with mean word error rates of 32%, 43%, and 40%, respectively. These automated transcripts achieved similar or even better results than manual transcripts in detecting dementia, reaching classification accuracies of 89.58%, 83.33%, and 81.25%, respectively.

Conclusion: Our best model, built with ensemble learning, is comparable to state-of-the-art methods based on manual transcription, suggesting the feasibility of an end-to-end medical assistance system for AD detection driven by ASR engines. Moreover, the critical linguistic features identified might provide insight for further studies on the mechanisms of AD.
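The word error rates reported above are the standard ASR quality metric: the word-level edit distance (substitutions + deletions + insertions) between the hypothesis and reference transcripts, divided by the number of reference words. The abstract does not specify the scoring tool used, so the following is only a minimal illustrative sketch of how WER is conventionally computed:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (S + D + I) / N, via word-level Levenshtein distance."""
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i          # all deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j          # all insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub_cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,            # deletion
                d[i][j - 1] + 1,            # insertion
                d[i - 1][j - 1] + sub_cost, # substitution / match
            )
    return d[len(ref)][len(hyp)] / len(ref)
```

For example, a one-word deletion from a six-word reference yields a WER of 1/6 (about 17%), so the 32–43% rates above indicate substantially noisier transcripts, which makes the comparable classification accuracy notable.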
