Abstract

The assessment of speech in Cerebellar Ataxia (CA) is time-consuming and requires clinical interpretation. In this study, we introduce a fully automated objective algorithm that uses significant acoustic features from the time, spectral, cepstral, and non-linear dynamics domains present in microphone data obtained from different repeated Consonant-Vowel (C-V) syllable paradigms. The algorithm builds machine-learning models to support a 3-tier diagnostic categorisation: distinguishing Ataxic Speech from healthy speech, rating the severity of Ataxic Speech, and providing nomogram-based scoring charts for Ataxic Speech diagnosis and severity prediction. Feature selection was accomplished using a combination of mass univariate analysis and elastic net regularization for the binary outcome, while for the ordinal outcome, Spearman's rank-order correlation criterion was employed. The algorithm was developed and evaluated using recordings from 126 participants: 65 individuals with CA and 61 controls (i.e., neurotypical individuals without ataxia). For Ataxic Speech diagnosis, the reduced feature set yielded an area under the curve (AUC) of 0.97 (95% CI 0.90-1), a sensitivity of 97.43%, a specificity of 85.29%, and a balanced accuracy of 91.2% in the test dataset. The mean AUC for severity estimation was 0.74 in the test set. The high C-indexes of the prediction nomograms for identifying the presence of Ataxic Speech (0.96) and estimating its severity (0.81) in the test set indicate the efficacy of this algorithm. Decision curve analysis demonstrated the value of incorporating acoustic features from two repeated C-V syllable paradigms. The strong classification ability of the specified speech features supports the framework's usefulness for identifying and monitoring Ataxic Speech.
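The feature-selection step for the binary outcome (elastic net regularization on acoustic features) can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: the synthetic data, feature count, and hyperparameters (`l1_ratio`, `C`) are assumptions, and scikit-learn's elastic-net-penalised logistic regression stands in for whatever toolchain the paper used.

```python
# Illustrative sketch of elastic-net feature selection for a binary
# ataxia-vs-control label. Data are synthetic; dimensions loosely mirror
# the study (126 participants); all hyperparameters are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n, p = 126, 40                        # 126 participants, 40 acoustic features
X = rng.normal(size=(n, p))
w_true = np.zeros(p)
w_true[:5] = 2.0                      # only the first 5 features are informative
y = (X @ w_true + rng.normal(size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)
scaler = StandardScaler().fit(X_tr)

# Elastic-net-penalised logistic regression; the 'saga' solver is the
# one in scikit-learn that supports the combined L1/L2 penalty.
clf = LogisticRegression(penalty="elasticnet", solver="saga",
                         l1_ratio=0.5, C=0.5, max_iter=5000)
clf.fit(scaler.transform(X_tr), y_tr)

# Features surviving the L1 component (non-zero coefficients) form the
# reduced feature set; AUC on held-out data gauges its discriminability.
selected = np.flatnonzero(clf.coef_[0])
auc = roc_auc_score(y_te, clf.predict_proba(scaler.transform(X_te))[:, 1])
print(len(selected), round(auc, 2))
```

In practice the L1 component of the penalty drives uninformative coefficients to exactly zero, which is what makes the elastic net usable as a feature selector rather than only a regularizer.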
