Abstract

Pronunciation variation is a major problem in disordered speech recognition. This paper focuses on handling pronunciation variations in dysarthric speech by forming speaker-specific lexicons. A novel approach is proposed for identifying the mispronunciations made by each dysarthric speaker, using the state-specific vector (SSV) of the phone-cluster adaptive training (Phone-CAT) acoustic model. The SSV is a low-dimensional vector estimated for each tied state, in which each element denotes the weight of a particular monophone. The dominant weight in the SSV indicates the phone that was actually pronounced. This property of the SSV is exploited to adapt the pronunciations of a particular dysarthric speaker through speaker-specific lexicons. Experimental validation on the Nemours database showed an average relative improvement of 9% across all speakers compared to a system built with the canonical lexicon.

Index Terms: Dysarthric speech recognition, Phone-CAT, lexical modeling, pronunciations, phone confusion matrix
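
To make the mechanism concrete, the sketch below (Python/NumPy, with hypothetical names; not the paper's actual implementation) illustrates the idea under stated assumptions: the dominant weight of an SSV maps a tied state to the phone the speaker actually produced, these decisions accumulate into a per-speaker phone confusion matrix, and consistent confusions yield speaker-specific pronunciation variants alongside the canonical ones.

```python
import numpy as np
from collections import defaultdict

# Hypothetical monophone inventory; in practice this comes from the acoustic model.
MONOPHONES = ["aa", "ae", "b", "d", "iy", "k", "p", "t", "sil"]

def dominant_phone(ssv):
    """Return the monophone with the largest weight in a state-specific vector (SSV)."""
    return MONOPHONES[int(np.argmax(ssv))]

def phone_confusion_matrix(aligned_states):
    """Accumulate a per-speaker phone confusion matrix.

    aligned_states: iterable of (canonical_phone, ssv) pairs obtained by
    aligning the speaker's utterances against the canonical lexicon.
    """
    confusions = defaultdict(lambda: defaultdict(int))
    for canonical, ssv in aligned_states:
        confusions[canonical][dominant_phone(ssv)] += 1
    return confusions

def speaker_lexicon(canonical_lexicon, confusions, min_count=3):
    """Build a speaker-specific lexicon: where a canonical phone is consistently
    realized as another phone, add a pronunciation variant with that substitution."""
    lexicon = {}
    for word, phones in canonical_lexicon.items():
        variant = []
        for p in phones:
            realized, count = max(confusions.get(p, {p: 1}).items(),
                                  key=lambda kv: kv[1])
            variant.append(realized if count >= min_count else p)
        # Keep both the canonical pronunciation and the speaker-specific variant.
        lexicon[word] = {tuple(phones), tuple(variant)}
    return lexicon
```

The thresholding via min_count is an illustrative choice: only confusions observed often enough for a given speaker are promoted into that speaker's lexicon, so occasional misrecognitions do not create spurious pronunciation variants.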

Highlights

  • Clinical applications of speech technology play an important role in aiding communication for people with motor speech disorders

  • Two different experiments were performed to compare with the baseline continuous density hidden Markov model (CDHMM) (Base) system

  • This paper focuses on improving the performance of dysarthric speech recognition systems by handling pronunciation errors


Summary

Introduction

Clinical applications of speech technology play an important role in aiding communication for people with motor speech disorders. One such motor speech disorder is dysarthria, acquired secondary to stroke, traumatic brain injury, cerebral palsy, and similar conditions. Common characteristics of dysarthria include slurred speech, swallowing difficulty, a slow speaking rate with increased effort to speak, and muscle fatigue while speaking [1, 2]. These characteristics reduce speech intelligibility and impair the social interaction of people with speech disorders. Acoustic models for such speakers are usually built in a speaker adaptation framework [3, 4, 5].


