Abstract

Autoregressive speech-to-text alignment is a critical component of neural text-to-speech (TTS) models. Commonly, autoregressive TTS models rely on an attention mechanism to learn these alignments online, but such alignments are often brittle and fail to generalize to long utterances or out-of-domain text, leading to missing or repeating words. Non-autoregressive end-to-end TTS models usually rely on durations extracted from external sources. Our work exploits the alignment mechanism proposed in RAD-TTS, which can be applied to a variety of neural TTS architectures. In our experiments, the proposed alignment learning framework improves all tested TTS architectures, both autoregressive (Flowtron, Tacotron 2) and non-autoregressive (FastPitch, FastSpeech 2, RAD-TTS). Specifically, it speeds up alignment convergence in existing attention-based mechanisms, simplifies the training pipeline, and makes models more robust to errors on long utterances. Most importantly, it also improves perceived speech synthesis quality under expert human evaluation.
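To make the mechanism concrete, below is a minimal sketch (not the authors' released code) of the RAD-TTS-style alignment objective: a soft alignment between encoded text tokens and mel frames, trained with a CTC forward-sum loss that marginalizes over all monotonic alignments. The tensor names (text_emb, mel_emb) and the pairwise-distance scoring are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def alignment_loss(text_emb, mel_emb, text_len, mel_len):
    """Forward-sum alignment loss sketch.

    text_emb: (B, N, C) encoded text tokens
    mel_emb:  (B, T, C) encoded mel frames
    text_len, mel_len: (B,) int tensors of valid lengths
    """
    B, N, _ = text_emb.shape
    # Pairwise L2 distances between every mel frame and every text token;
    # closer pairs get higher (less negative) alignment scores.  (B, T, N)
    scores = -torch.cdist(mel_emb, text_emb)
    # Prepend a very low-scoring CTC "blank" column as class 0, then
    # normalize over blank + tokens so rows are valid log-distributions.
    scores = F.pad(scores, (1, 0), value=-1e4)              # (B, T, N+1)
    log_probs = F.log_softmax(scores, dim=-1).transpose(0, 1)  # (T, B, N+1)
    # Targets are simply token positions 1..N: CTC then sums the
    # probability of every monotonic path that visits each token in order,
    # allowing each token to span several frames.
    targets = torch.arange(1, N + 1, device=text_emb.device).expand(B, N)
    return F.ctc_loss(log_probs, targets, mel_len, text_len,
                      blank=0, zero_infinity=True)
```

In the paper's framework, this soft-alignment objective is further aided by a static alignment prior that biases attention toward near-diagonal paths to speed convergence, and the learned soft alignment can be binarized (e.g., via Viterbi decoding) to extract durations for non-autoregressive models.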
