Abstract

As computer technology advances, speech synthesis techniques are becoming increasingly sophisticated. Speech cloning, a subtask of speech synthesis, uses deep learning to extract acoustic information from a human voice and combine it with text to produce natural-sounding speech. However, traditional speech cloning still has limitations: very long text inputs cannot be processed adequately, and the synthesized audio may contain noise artifacts such as breaks and unclear phrases. In this study, we add a text determination module to the synthesizer to handle words outside the model's vocabulary. The original model renders such words with fuzzy pronunciation, which is not only meaningless but also degrades the whole sentence; we instead split these words into letters and pronounce each letter separately. We also improve the synthesizer's preprocessing and waveform conversion modules, replacing its pre-net and applying an upgraded noise reduction algorithm within the SV2TTS framework. Throughout, we focus on improving the synthesizer module to achieve higher-quality synthesized speech output.
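The letter-splitting step for out-of-vocabulary words can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `LEXICON` set and `preprocess_text` function are hypothetical stand-ins for the model's actual vocabulary lookup and text determination module.

```python
# Hypothetical sketch of the text determination step: words missing from
# the model's vocabulary are split into letters so each letter can be
# pronounced separately, instead of producing a fuzzy pronunciation.

LEXICON = {"speech", "synthesis", "clone"}  # stand-in for the model's vocabulary

def preprocess_text(text):
    """Tokenize text, spelling out any out-of-vocabulary words letter by letter."""
    tokens = []
    for word in text.lower().split():
        if word in LEXICON:
            tokens.append(word)          # known word: pass through unchanged
        else:
            tokens.extend(list(word))    # OOV word: e.g. "tts" -> ["t", "t", "s"]
    return tokens

print(preprocess_text("speech synthesis tts"))
# ['speech', 'synthesis', 't', 't', 's']
```

The resulting token sequence would then be fed to the synthesizer in place of the raw text, so unknown words are spoken as spelled-out letters rather than garbled audio.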
