Abstract

Accurate phonetic transcriptions are crucial for building robust acoustic models for speech recognition as well as speech synthesis applications. Phonetic transcriptions are not usually provided with speech corpora. A lexicon is typically used to generate phone-level transcriptions from the sentence-level transcriptions that accompany a corpus. When lexical entries are not available, letter-to-sound (LTS) rules are used instead. In either case, the pronunciation rules are generic and may not match the actual spoken utterance, which can lead to transcription errors. The objective of this study is to address the mismatch between the transcription and its acoustic realisation. In particular, the issue of vowel deletions is studied. Group-delay-based segmentation is used to detect insertion or deletion of vowels in the speech utterance, and the transcriptions of the training data are corrected accordingly. The corrected data are used to build automatic speech recognition (ASR) and text-to-speech (TTS) synthesis systems. ASR and TTS systems built with the corrected transcriptions show improved performance.
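The abstract mentions group-delay-based segmentation for locating vowel regions. The following is a minimal sketch, assuming a NumPy environment and 16 kHz audio, of the general idea: the short-term energy contour of an utterance is treated as a magnitude spectrum, the group delay of its minimum-phase equivalent is computed, and peaks in that contour are taken as candidate vowel-like (high-energy) regions. The frame sizes, function names, and simple peak-picking rule here are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def short_term_energy(x, frame_len=400, hop=160):
    """Frame-wise energy (assumes 16 kHz audio, 25 ms frames, 10 ms hop)."""
    n_frames = 1 + max(0, len(x) - frame_len) // hop
    return np.array([np.sum(x[i * hop:i * hop + frame_len] ** 2)
                     for i in range(n_frames)])

def min_phase_group_delay(contour):
    """Group delay of the minimum-phase signal whose magnitude spectrum is
    the (symmetrised) input contour."""
    e = contour / (np.max(contour) + 1e-12)
    spec = np.concatenate([e, e[::-1]])           # even-symmetric "spectrum"
    cep = np.fft.ifft(np.log(spec + 1e-12)).real  # real cepstrum
    n = len(cep)
    c_min = np.zeros(n)                           # minimum-phase cepstrum
    c_min[0] = cep[0]
    c_min[1:n // 2] = 2.0 * cep[1:n // 2]
    # tau(w) = sum_k k * c_min[k] * cos(k w): real part of the DFT of k*c_min
    gd = np.fft.fft(np.arange(n) * c_min).real
    return gd[:len(contour)]

def candidate_vowel_regions(x):
    """Return frame indices of group-delay peaks (candidate vowel nuclei)."""
    gd = min_phase_group_delay(short_term_energy(x))
    return [i for i in range(1, len(gd) - 1)
            if gd[i] > gd[i - 1] and gd[i] >= gd[i + 1]]
```

In the paper's setting, such detected vowel regions would be compared against the vowels predicted by the lexicon or LTS rules, and the training transcriptions corrected where the two disagree; the comparison and correction steps are not shown here.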
