Abstract

Speech and language disturbances have been observed from the early stages of Alzheimer's disease (AD), including mild cognitive impairment (MCI), and speech analysis is expected to serve as a screening tool for early detection of AD and MCI. However, whether and how automatic speech analysis, including speech recognition in a self-administered tool, can be used for such detection remains largely unexplored. In this study, we performed automatic analysis of speech data collected via a mobile application from 114 older participants during cognitive tasks, with the goal of classifying the AD, MCI, and cognitively normal (CN) groups by using speech features characterizing acoustic, prosodic, and linguistic aspects. We first evaluated how accurately linguistic features could be extracted automatically from transcriptions generated by automatic speech recognition (ASR) and found that these features were highly correlated (r = 0.92) with those extracted from manual transcriptions. A machine-learning classifier using these speech features then achieved 78.6% accuracy for three-way classification of AD, MCI, and CN under nested cross-validation (AD versus CN: 91.2% accuracy; MCI versus CN: 87.6% accuracy). Our results suggest the utility and validity of a mobile application with automatic speech analysis as a self-administered screening tool for early detection of AD and MCI.
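The abstract does not specify the classifier, feature set, or cross-validation configuration, so the following is only a minimal sketch of the two evaluation steps it describes: correlating ASR-derived linguistic features with manually derived ones, and estimating classification accuracy with nested cross-validation. It assumes scikit-learn and SciPy, uses an SVM as a hypothetical stand-in for the study's model, and runs on synthetic placeholder data.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the study's data: 114 participants, each with
# acoustic/prosodic/linguistic speech features and a label (0=CN, 1=MCI, 2=AD).
n_participants, n_features = 114, 20
X = rng.normal(size=(n_participants, n_features))
y = rng.integers(0, 3, size=n_participants)

# Step 1: a linguistic feature computed twice, once from ASR transcripts and
# once from manual transcripts, to check how well ASR preserves it.
feat_manual = rng.normal(size=n_participants)
feat_asr = feat_manual + rng.normal(scale=0.4, size=n_participants)
r, _ = pearsonr(feat_asr, feat_manual)
print(f"ASR vs. manual feature correlation: r = {r:.2f}")

# Step 2: nested cross-validation for the three-way classifier.
# The inner loop tunes hyperparameters; the outer loop estimates accuracy
# on participants never seen during tuning.
model = make_pipeline(StandardScaler(), SVC())
param_grid = {"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.01]}
inner_cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
outer_cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)
search = GridSearchCV(model, param_grid, cv=inner_cv)
scores = cross_val_score(search, X, y, cv=outer_cv)
print(f"Nested CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```

With real features in place of the synthetic arrays, the same structure yields the pairwise comparisons reported above (e.g., AD versus CN) by subsetting the rows of X and y to the two groups of interest.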
