Abstract

Text Normalization (TN) is an essential component of conversational systems such as text-to-speech synthesis (TTS) and automatic speech recognition (ASR). It is the process of transforming non-standard words (NSWs) into a representation of how they are to be spoken. Existing approaches to TN are mainly rule-based or hybrid systems, which require abundant hand-crafted rules. In this paper, we treat TN as a neural machine translation problem and present a purely data-driven TN system based on the Transformer framework. A Partial Parameter Generator (PPG) and a Pointer-Generator Network (PGN) are combined in our model to improve normalization accuracy and act as auxiliary modules that reduce the number of simple errors. Experiments demonstrate that our proposed model achieves remarkable performance on various semiotic classes.
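To make the copy mechanism concrete, the sketch below shows one common way a pointer-generator head can be placed on top of a Transformer decoder: a vocabulary (generation) distribution is blended with a copy distribution derived from the cross-attention weights over source tokens. This is a minimal illustrative sketch under our own assumptions (class name `PointerGeneratorHead`, a shared source/target vocabulary, PyTorch tensors), not the authors' exact PGN or PPG implementation.

```python
# Minimal sketch (PyTorch) of a pointer-generator mixing step on top of a
# Transformer decoder. Names and shapes here are assumptions for illustration;
# the paper's PPG module and training details are not reproduced.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PointerGeneratorHead(nn.Module):
    """Blend a vocabulary distribution with a copy distribution over source tokens."""

    def __init__(self, d_model: int, vocab_size: int):
        super().__init__()
        self.vocab_proj = nn.Linear(d_model, vocab_size)  # generation logits
        self.gate = nn.Linear(d_model, 1)                 # produces p_gen in [0, 1]

    def forward(self, dec_state, cross_attn, src_ids):
        """
        dec_state:  (batch, tgt_len, d_model)  decoder hidden states
        cross_attn: (batch, tgt_len, src_len)  attention weights over source tokens
        src_ids:    (batch, src_len)           source token ids (shared vocabulary assumed)
        """
        p_gen = torch.sigmoid(self.gate(dec_state))                # (B, T, 1)
        gen_dist = F.softmax(self.vocab_proj(dec_state), dim=-1)   # (B, T, V)

        # Scatter the attention mass onto the vocabulary positions of the
        # source token ids to obtain the copy distribution.
        copy_dist = torch.zeros_like(gen_dist)
        index = src_ids.unsqueeze(1).expand(-1, dec_state.size(1), -1)  # (B, T, S)
        copy_dist.scatter_add_(-1, index, cross_attn)

        # Final output distribution: mixture of generating and copying.
        return p_gen * gen_dist + (1.0 - p_gen) * copy_dist


# Usage example with toy shapes (batch=2, tgt_len=5, src_len=7, vocab=1000).
head = PointerGeneratorHead(d_model=256, vocab_size=1000)
probs = head(
    torch.randn(2, 5, 256),
    torch.softmax(torch.randn(2, 5, 7), dim=-1),
    torch.randint(0, 1000, (2, 7)),
)
print(probs.shape)  # torch.Size([2, 5, 1000])
```

Copying in this way lets unseen tokens in the input (for example, literal digits or symbols that should pass through unchanged) be emitted directly, which is one reason pointer-generator components help reduce simple errors in normalization.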
