Abstract

Patients who have undergone total laryngectomy and use an electrolarynx for voice production suffer from poor intelligibility, which in many cases leads to a fear of speaking to strangers, even over the phone. Automatic Speech Recognition (ASR) systems could help patients overcome this problem in many ways. Unfortunately, even state-of-the-art ASR systems cannot provide results comparable to those achieved for conventional speakers. The problem is mainly caused by the similarity between voiced and unvoiced phoneme pairs. In many cases a language model can help to resolve the ambiguity, but only if the word context is sufficiently long. Therefore, adjustment of the acoustic data and/or the acoustic model is necessary to increase recognition accuracy. In this paper, we propose elongation of voiceless phonemes to improve recognition accuracy and enrich the ASR system with a model that takes this elongation into account. The idea of elongation is verified in a set of ASR experiments with artificially elongated voiceless phonemes. To enrich the ASR system, a DNN model for rescoring lattices based on phoneme duration is proposed. The new system is compared with a standard ASR system. It is also verified that an ASR system created using elongated synthetic data can successfully recognize actual elongated data pronounced by a real speaker.
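
As an illustration of what the artificial elongation step might look like in practice, below is a minimal Python sketch that time-stretches the voiceless phoneme segments of a recording, given a phoneme-level alignment. The alignment format, the VOICELESS phoneme set, the 1.5x stretch factor, and the use of librosa/soundfile are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: elongate voiceless phoneme segments of a recording.
# Assumes a contiguous phoneme-level alignment (start_sec, end_sec, phone),
# e.g. produced by a forced aligner. The phoneme set and stretch factor are
# hypothetical choices for illustration only.
import numpy as np
import librosa
import soundfile as sf

VOICELESS = {"p", "t", "k", "f", "s", "sh", "ch", "h"}  # illustrative set
STRETCH = 1.5  # hypothetical elongation factor for voiceless phonemes

def elongate_voiceless(wav_path, alignment, out_path, sr=16000):
    """alignment: list of (start_sec, end_sec, phone) triples covering the file."""
    y, _ = librosa.load(wav_path, sr=sr)
    pieces = []
    for start, end, phone in alignment:
        seg = y[int(start * sr):int(end * sr)]
        if phone.lower() in VOICELESS and len(seg) > 0:
            # rate < 1 slows the segment down, i.e. elongates it
            # (very short segments are stretched crudely by the phase vocoder)
            seg = librosa.effects.time_stretch(seg, rate=1.0 / STRETCH)
        pieces.append(seg)
    sf.write(out_path, np.concatenate(pieces), sr)
```

A signal-level stretch like this is only one way to obtain elongated training data; the same effect could be produced at synthesis time when the data are generated synthetically.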
