Abstract
Individuals with dysarthria are unable to control rapid movements of the velum, leading to a reduction in the intelligibility, audibility, naturalness, and efficiency of vocal communication. Automatic intelligibility assessment of dysarthric patients allows clinicians to gauge the impact of therapy and medication and to plan the future course of action. Earlier works have concentrated on building speaker-dependent machine learning systems for intelligibility assessment, owing to the limited availability of data. However, a speaker-independent assessment system is of greater use to clinicians. Motivated by this observation, we propose a speaker-independent intelligibility assessment system that relies on a novel set of features obtained by processing the output of DeepSpeech, an end-to-end speech-to-text engine. All experiments have been performed on the Universal Access Speech database. An accuracy of 53.9% was obtained using a Support Vector Machine-based four-class classification system in the speaker-independent scenario, while the accuracy obtained in the speaker-dependent scenario was 97.4%.
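To make the evaluation setup concrete, the sketch below shows a four-class SVM classification with both speaker-dependent and speaker-independent evaluation. This is only an illustration of the general protocol, not the paper's implementation: the feature matrix is a synthetic placeholder standing in for features derived from DeepSpeech output (whose extraction is not detailed in the abstract), and the use of leave-one-speaker-out cross-validation for the speaker-independent case is an assumption.

```python
# Minimal sketch of a four-class intelligibility classification setup.
# Assumptions (not confirmed by the abstract): synthetic placeholder
# features in place of DeepSpeech-derived features, and leave-one-
# speaker-out cross-validation for the speaker-independent scenario.

import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)

# Placeholder data: 200 utterances, 20-dimensional feature vectors.
X = rng.normal(size=(200, 20))
y = rng.integers(0, 4, size=200)          # four intelligibility classes
speakers = rng.integers(0, 10, size=200)  # speaker identity per utterance

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))

# Speaker-independent: each test fold contains only speakers unseen in training.
si_scores = cross_val_score(clf, X, y, cv=LeaveOneGroupOut(), groups=speakers)
print(f"Speaker-independent accuracy: {si_scores.mean():.3f}")

# Speaker-dependent: folds may share speakers between training and test sets.
sd_scores = cross_val_score(clf, X, y, cv=5)
print(f"Speaker-dependent accuracy: {sd_scores.mean():.3f}")
```

In this kind of setup, the gap between the two evaluation schemes (97.4% vs. 53.9% in the abstract) reflects how much the classifier exploits speaker-specific characteristics rather than intelligibility cues that generalize across speakers.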