Abstract

Clinical interventions for speech disorders aim to produce changes that are not only acoustically measurable or perceptible to trained professionals but also apparent to naïve listeners. Because of the difficulty of collecting ratings from a suitably large sample of listeners, few researchers evaluate speech interventions against this criterion. Crowdsourcing technologies could enable a shift toward a more naturalistic standard for evaluating speech interventions. This project compared ratings from 35 naïve crowdsourced listeners against acoustic measures of speech samples collected from patients with hypokinetic dysarthria secondary to Parkinson's disease. The data come from a published efficacy study (Sapir et al., 2007) that documented significant acoustic changes after a period of intensive treatment with the Lee Silverman Voice Treatment (LSVT). Specifically, the ratio of mean F2 in /i/ to mean F2 in /u/, calculated for each subject, was greater in post-treatment than in pre-treatment samples. Mixed-effects logistic regression indicated that words elicited post-treatment, presented in randomly ordered pairs with words elicited pre-treatment, were significantly more likely to be rated "more clear" (β = 1.24, SE = .46, p < .01). This result supports the original conclusion of Sapir et al. (2007); it also supports the validity of crowdsourcing as a means of obtaining ratings of disordered speech data.
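The abstract describes two quantitative steps: computing each subject's /i/-to-/u/ F2 ratio, and fitting a mixed-effects logistic regression to the paired clarity judgments. Neither data nor analysis code accompany the abstract, so the sketch below is only a hypothetical illustration in Python, using pandas for the ratio and statsmodels' variational BinomialBayesMixedGLM as a stand-in for the mixed-effects logistic regression (the original analysis may well have been fit with lme4 in R). All column names and values are invented.

```python
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# --- Acoustic measure: per-subject ratio of mean F2 in /i/ to mean F2 in /u/ ---
# Hypothetical long-format formant table: one row per vowel token.
tokens = pd.DataFrame({
    "subject": ["s1"] * 4 + ["s2"] * 4,
    "phase":   ["pre", "pre", "post", "post"] * 2,
    "vowel":   ["i", "u", "i", "u"] * 2,
    "f2_hz":   [1850, 1210, 2050, 1120, 1790, 1260, 1980, 1150],
})
means = tokens.groupby(["subject", "phase", "vowel"])["f2_hz"].mean().unstack("vowel")
means["f2_ratio"] = means["i"] / means["u"]  # expected larger post-treatment
print(means["f2_ratio"])

# --- Listener ratings: mixed-effects logistic regression on paired judgments ---
# Each row is one pre/post word pair judged by one listener; the outcome codes
# whether the post-treatment token was chosen as "more clear". Data are
# simulated here with a true log-odds near the reported estimate of 1.24.
rng = np.random.default_rng(0)
n_listeners, n_words = 35, 20
trials = pd.DataFrame({
    "listener": np.repeat([f"L{i}" for i in range(n_listeners)], n_words),
    "word": np.tile([f"w{j}" for j in range(n_words)], n_listeners),
})
trials["post_chosen"] = rng.binomial(1, 1 / (1 + np.exp(-1.24)), size=len(trials))

# Intercept-only logistic model with crossed random intercepts for listener
# and word; a positive intercept means post-treatment tokens are preferred.
model = BinomialBayesMixedGLM.from_formula(
    "post_chosen ~ 1",
    vc_formulas={"listener": "0 + C(listener)", "word": "0 + C(word)"},
    data=trials,
)
print(model.fit_vb().summary())
```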
