Abstract

Natural language processing (NLP) has exhibited potential in detecting Alzheimer's disease (AD) and related dementias, particularly because AD affects spontaneous speech. Recent research has emphasized the significance of context-based models, such as Bidirectional Encoder Representations from Transformers (BERT). However, these models often come at the expense of increased complexity and computational requirements, which are not always available. In light of these considerations, we propose a straightforward and efficient word2vec-based model for AD detection and evaluate it on the Alzheimer's Dementia Recognition through Spontaneous Speech (ADReSS) challenge dataset. Additionally, we explore the efficacy of fusing our model with classic linguistic features and compare it to other contextual models by fine-tuning BERT-based and Generative Pre-trained Transformer (GPT) sequence classification models. We find that these simpler models achieve an accuracy of 92% in classifying AD cases, along with a root mean square error (RMSE) of 4.21 in estimating Mini-Mental State Examination (MMSE) scores. Notably, our models outperform all state-of-the-art models in the literature for classifying AD cases and estimating MMSE scores, including contextual language models.
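
To make the word2vec-based approach concrete, the sketch below shows one common way such a pipeline can be assembled: word vectors are mean-pooled over each speech transcript and the resulting document vector is passed to a simple classifier. This is an illustrative assumption, not the authors' exact pipeline; the variables `transcripts` and `labels` are hypothetical stand-ins for the ADReSS transcripts and AD/control labels, and the tiny toy data exists only so the snippet runs end to end.

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical tokenized transcripts (word lists) and binary AD labels.
transcripts = [
    ["the", "boy", "is", "taking", "the", "cookie"],
    ["um", "there", "is", "a", "a", "thing", "there"],
    ["the", "mother", "is", "washing", "dishes"],
    ["i", "see", "uh", "something", "falling"],
]
labels = np.array([0, 1, 0, 1])

# Train word2vec on the transcripts themselves; a pretrained embedding
# model could be substituted here instead.
w2v = Word2Vec(sentences=transcripts, vector_size=50, window=5,
               min_count=1, seed=0)

def embed(tokens, model):
    """Mean-pool the word vectors of in-vocabulary tokens."""
    vecs = [model.wv[t] for t in tokens if t in model.wv]
    if not vecs:
        return np.zeros(model.vector_size)
    return np.mean(vecs, axis=0)

X = np.vstack([embed(t, w2v) for t in transcripts])

# A linear classifier over the pooled embeddings; classic linguistic
# features could be concatenated onto X before fitting (feature fusion).
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, labels, cv=2)
print("cross-validated accuracy:", scores.mean())
```

The same pooled representation could feed a regressor (e.g., ridge regression) to estimate MMSE scores, with RMSE as the evaluation metric.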
