Abstract

In this paper, we propose an effective technique to transplant a source speaker's emotional expression to a new target speaker's voice within an end-to-end text-to-speech (TTS) framework. We modify an expressive TTS model pre-trained on a source speaker's emotional speech database to reflect the voice characteristics of a target speaker for whom only a neutral speech database is available. We set two adaptation criteria to achieve this. The first is to minimize the reconstruction loss between the target speaker's recorded and synthesized speech, so that the synthesized speech carries the target speaker's voice characteristics. The second is to minimize the emotion loss between the emotion embedding vectors extracted from the reference expressive speech and from the target speaker's synthesized expressive speech, which is essential for preserving expressiveness. Because the two criteria are applied alternately during adaptation, we avoid the bias issues frequently encountered in similar tasks. The proposed adaptation technique outperforms conventional approaches in both quantitative and qualitative evaluations.

Highlights

  • The task of generating natural speech from the input text, i.e., text-to-speech (TTS), is becoming increasingly important, as it is a key module in building human-computer interaction systems

  • We experimentally found that the style of the synthesized speech becomes increasingly ambiguous as model adaptation progresses; eventually, the synthesized output no longer faithfully reflects the intended expressive style

  • The end-to-end expressive TTS model used in this paper consists of two components: (1) an emotion encoder, which outputs an expressiveness condition vector given a reference expressive speech input, and (2) an E2E-TTS model, which synthesizes expressive speech from the input text and the expressiveness condition vector


Summary

INTRODUCTION

The task of generating natural speech from input text, i.e., text-to-speech (TTS), is becoming increasingly important, as it is a key module in building human-computer interaction systems. An effective way to solve this problem is speaker adaptation [16]–[20], in which a baseline model is trained on a large database and then adjusted to a target speaker using only a small amount of data. This approach can be extended to expressive speech through emotion transplantation, i.e., training an expressive TTS model on another speaker's available expressive speech database and then adjusting the pre-trained model to the target speaker's voice [21]–[24]. Our adaptation alternates between two steps: a reconstruction step, which fits the model to the target speaker's neutral recordings, and an emotion step, which requires that the condition vectors extracted from the synthesized target speaker's expressive speech have an emotional style identical to that of the input condition vector. These two steps are repeated until the model converges; a minimal sketch of the loop is given below.
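To make the alternating procedure concrete, the following is an illustrative PyTorch-style sketch of the adaptation loop. All interfaces here (the `tts_model` and `emotion_encoder` modules, the data loaders, the L1/MSE loss choices, and the Adam optimizer) are assumptions made for illustration, not the authors' implementation.

```python
# Illustrative sketch of the alternating adaptation loop (assumed
# interfaces, not the authors' code). Loaders yield (text_ids, mel)
# batches; the encoder maps a mel-spectrogram to a condition vector.
import torch
import torch.nn.functional as F


def adapt(tts_model, emotion_encoder, neutral_loader, expressive_loader,
          lr=1e-4):
    # Only the TTS model is adapted; the emotion encoder stays frozen.
    for p in emotion_encoder.parameters():
        p.requires_grad_(False)
    opt = torch.optim.Adam(tts_model.parameters(), lr=lr)

    for (text_n, mel_n), (text_e, ref_mel) in zip(neutral_loader,
                                                  expressive_loader):
        # Step 1 (reconstruction): fit the target speaker's neutral
        # recordings so the model acquires the target voice.
        cond_n = emotion_encoder(mel_n)
        loss_recon = F.l1_loss(tts_model(text_n, cond_n), mel_n)
        opt.zero_grad()
        loss_recon.backward()
        opt.step()

        # Step 2 (emotion): the condition vector re-extracted from the
        # synthesized expressive speech should match the reference
        # condition vector, preserving the source expressiveness.
        cond_ref = emotion_encoder(ref_mel)
        mel_syn = tts_model(text_e, cond_ref)
        loss_emo = F.mse_loss(emotion_encoder(mel_syn), cond_ref)
        opt.zero_grad()
        loss_emo.backward()
        opt.step()
```

Freezing the emotion encoder in this sketch reflects the paper's framing that only the TTS model is adapted; gradients from the emotion loss still flow to the TTS model through the synthesized spectrogram.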

MODEL ARCHITECTURE
The model comprises an emotion encoder, which extracts an expressiveness condition vector from reference speech, and an E2E-TTS model M_TTS, which synthesizes speech from the input text and the condition vector. During adaptation, M_TTS is updated alternately with respect to the reconstruction loss and the emotion loss L_emo.
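As a rough illustration of this two-component layout, here is a toy PyTorch skeleton. The layer types and sizes are assumptions for illustration only; the paper's actual E2E-TTS backbone and encoder design are not reproduced here.

```python
# Toy skeleton of the two components; layer choices and sizes are
# illustrative assumptions, not the paper's actual architecture.
import torch
import torch.nn as nn


class EmotionEncoder(nn.Module):
    """Maps a reference mel-spectrogram to a fixed-size
    expressiveness condition vector."""

    def __init__(self, n_mels=80, emb_dim=128):
        super().__init__()
        self.rnn = nn.GRU(n_mels, emb_dim, batch_first=True)

    def forward(self, ref_mel):              # (B, T, n_mels)
        _, h = self.rnn(ref_mel)
        return h[-1]                         # (B, emb_dim)


class ExpressiveTTS(nn.Module):
    """Stand-in for the E2E-TTS component: consumes text tokens plus
    the condition vector and emits a mel-spectrogram. For simplicity
    it emits one frame per token, unlike a real attention-based
    E2E-TTS decoder."""

    def __init__(self, vocab_size=100, emb_dim=128, n_mels=80):
        super().__init__()
        self.text_emb = nn.Embedding(vocab_size, emb_dim)
        self.decoder = nn.GRU(2 * emb_dim, 256, batch_first=True)
        self.to_mel = nn.Linear(256, n_mels)

    def forward(self, text_ids, cond):       # (B, L), (B, emb_dim)
        t = self.text_emb(text_ids)          # (B, L, emb_dim)
        c = cond.unsqueeze(1).expand(-1, t.size(1), -1)
        out, _ = self.decoder(torch.cat([t, c], dim=-1))
        return self.to_mel(out)              # (B, L, n_mels)
```

Broadcasting the condition vector across every decoder step is one common way to condition a sequence model on a global style embedding; the paper may use a different conditioning mechanism.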
Findings
CONCLUSION

