Abstract

Introduction: Clinically relevant information can go uncaptured in the conventional scoring of a verbal fluency test. We hypothesize that characterizing the temporal aspects of the response through a set of time-related measures will be useful in distinguishing those with mild cognitive impairment (MCI) from cognitively intact controls.

Methods: Audio recordings of an animal fluency test administered to 70 demographically matched older adults (mean age 90.4 years), 28 with MCI and 42 cognitively intact (CI), were professionally transcribed and fed into an automatic speech recognition (ASR) system to estimate the start time of each recalled word in the response. Next, we semantically clustered the participant-generated animal names and, through a novel set of time-based measures, characterized the semantic search strategy subjects used to retrieve words from animal-name clusters. These time-based features, along with standard count-based features (e.g., the number of correctly retrieved animal names), were then used to train a machine learning classifier to distinguish those with MCI from CI controls.

Results: The combination of count-based and time-based features, automatically derived from the test response, achieved an AUC-ROC of 77% with a support vector machine (SVM) classifier, outperforming a model trained only on the raw test score (AUC, 65%) and well above the chance model (AUC, 50%).

Conclusion: These findings support the value of adding time-based measures to the assessment of verbal fluency when using this generative task to differentiate subjects with MCI from those with intact cognition.
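To make the pipeline concrete, the following is a minimal sketch (not the authors' implementation) of how count-based and time-based features could be derived from a timed response and fed to an SVM. It assumes each response is already available as a list of (animal name, start time) pairs, and the ANIMAL_SUBCATEGORY lookup, feature set, and toy data are illustrative assumptions standing in for the paper's semantic clustering and feature definitions.

```python
# Minimal sketch (assumptions noted): combining count-based and time-based
# verbal-fluency features in an SVM classifier. Per-word start times are
# assumed to be available already (e.g., from an ASR step), and
# ANIMAL_SUBCATEGORY is a hypothetical lookup standing in for the paper's
# semantic clustering of animal names.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

ANIMAL_SUBCATEGORY = {  # hypothetical semantic clusters
    "dog": "pets", "cat": "pets", "hamster": "pets",
    "cow": "farm", "horse": "farm", "sheep": "farm",
}

def extract_features(response):
    """response: list of (animal_name, start_time_in_seconds), in spoken order."""
    words = [w for w, _ in response]
    times = np.array([t for _, t in response])
    gaps = np.diff(times)

    # Count-based feature: number of unique retrieved animal names.
    n_unique = len(set(words))

    # Time-based features: total retrieval span, mean inter-word gap, and the
    # mean gap within a semantic cluster vs. when switching between clusters.
    clusters = [ANIMAL_SUBCATEGORY.get(w, "other") for w in words]
    is_switch = np.array([a != b for a, b in zip(clusters, clusters[1:])])
    within_gap = gaps[~is_switch].mean() if (~is_switch).any() else 0.0
    between_gap = gaps[is_switch].mean() if is_switch.any() else 0.0
    return [n_unique, times[-1] - times[0], gaps.mean(), within_gap, between_gap]

# Toy data: one timed response per participant; labels 1 = MCI, 0 = intact.
responses = [
    [("dog", 1.2), ("cat", 2.0), ("hamster", 3.1), ("cow", 7.5), ("horse", 8.4)],
    [("cat", 1.8), ("cow", 6.0), ("dog", 12.5)],
]
y = np.array([0, 1])

X = np.array([extract_features(r) for r in responses])
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
clf.fit(X, y)
# With real data, performance would be reported as cross-validated AUC-ROC,
# e.g. sklearn.model_selection.cross_val_score(clf, X, y, scoring="roc_auc").
```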

Highlights

  • Relevant information can go uncaptured in the conventional scoring of a verbal fluency test

  • The verbal fluency (VF) test is administered in two different ways: (1) semantic fluency, in which participants are asked to generate words from a semantic category such as animals, fruits, or vegetables, and (2) phonemic fluency where participants must generate words that begin with a particular letter such as “F” or “S.” In the conventional scoring of VF tests, the count of uniquely generated words in the test comprises the final score

  • The nature of data acquisition in earlier proposed test administration frameworks assumes typing skills, which can be problematic for older individuals who have physical limitations or are less familiar with such devices. We address this problem by introducing a computational method that uses an automatic speech recognition (ASR) system to estimate word timestamps directly from spoken responses (see the sketch following this list)
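For illustration only, word-level start times could be obtained with an off-the-shelf open-source recognizer. The sketch below uses openai-whisper's word-timestamp option as an assumption; the paper does not name the ASR system or audio format it used.

```python
# Hypothetical sketch: estimating the start time of each recalled word with an
# off-the-shelf ASR model. openai-whisper and the file name are assumptions;
# the paper does not specify which recognizer was used.
import whisper

model = whisper.load_model("base")
# word_timestamps=True asks the model to attach start/end times to each word.
result = model.transcribe("animal_fluency_response.wav", word_timestamps=True)

word_starts = []
for segment in result["segments"]:
    for word in segment.get("words", []):
        # Keep the token text and its estimated onset (seconds from audio start).
        word_starts.append((word["word"].strip().lower(), word["start"]))

print(word_starts)  # e.g., [("dog", 1.2), ("cat", 2.0), ...]
```

Pairs of this form are what the feature-extraction sketch shown after the abstract consumes.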



Introduction

Clinically relevant information can go uncaptured in the conventional scoring of a verbal fluency test. More efficient methods of assessment can play a crucial role in screening for cognitive decline and can potentially reach a broader segment of the population at more frequent intervals. Prior research suggests that verbal fluency declines with age regardless of cognitive functioning, and that younger populations perform better on this test than older adults (Alenius et al., 2019; Taler et al., 2019). Among older adults with normal cognition, Farina et al. (2019) report that over a short period of one's life the rate of decline in VF score is not significant, whereas individuals with MCI achieve lower VF scores than the healthy population and their scores decline faster over the same period.


