Abstract

Generating speech with a specified emotion type is desirable for many human-computer interaction applications. Cross-speaker emotion transfer is a common approach to generating emotional speech when no emotion-labeled speech data from the target speakers are available for model training. This paper presents a novel cross-speaker emotion transfer system named iEmoTTS. The system comprises an emotion encoder, a prosody predictor, and a timbre encoder. The emotion encoder extracts the emotion type and the respective emotion intensity from the mel-spectrogram of input speech, where emotion intensity is measured by the posterior probability that the input utterance carries that emotion. The prosody predictor provides prosodic features for emotion transfer, and the timbre encoder provides timbre-related information. Unlike many other studies that focus on disentangling speaker and style factors of speech, iEmoTTS achieves cross-speaker emotion transfer via disentanglement between prosody and timbre: prosody is regarded as the primary carrier of emotion-related speech characteristics, while timbre accounts for the characteristics essential to speaker identity. iEmoTTS also realizes zero-shot emotion transfer, in which speech from the target speakers is not seen during model training. Extensive subjective evaluations demonstrate the effectiveness of iEmoTTS compared with other recently proposed cross-speaker emotion transfer systems. iEmoTTS can produce speech with designated emotion types and controllable emotion intensity, and with appropriate information bottleneck capacity it can transfer emotional information to a new speaker effectively. Audio samples are publicly available.
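The abstract describes emotion intensity as the posterior probability that an utterance carries the recognized emotion, with the intensity-modulated emotion (prosody) representation combined with a separate timbre representation. The following is a minimal illustrative sketch of that conditioning idea; all function names, dimensions, and the concatenation scheme are assumptions for exposition, not the authors' implementation.

```python
import math
import random

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def emotion_encoder(logits, emotion_table):
    """Map emotion-classifier logits to (emotion id, intensity, embedding).

    Intensity is the posterior probability that the utterance carries
    the recognized emotion, as stated in the abstract. The logits here
    stand in for the output of an encoder over the mel-spectrogram.
    """
    posteriors = softmax(logits)
    emotion_id = max(range(len(posteriors)), key=posteriors.__getitem__)
    intensity = posteriors[emotion_id]
    return emotion_id, intensity, emotion_table[emotion_id]

def condition(emotion_emb, intensity, timbre_emb):
    """Combine the intensity-scaled emotion (prosody) embedding with the
    timbre embedding into one conditioning vector (hypothetical scheme)."""
    return [intensity * x for x in emotion_emb] + list(timbre_emb)

# Toy example: 4 emotion types, 8-dimensional embeddings.
random.seed(0)
table = [[random.gauss(0, 1) for _ in range(8)] for _ in range(4)]
timbre = [random.gauss(0, 1) for _ in range(8)]
eid, intensity, emb = emotion_encoder([0.2, 2.0, 0.1, -0.5], table)
cond = condition(emb, intensity, timbre)
```

In this sketch, scaling the emotion embedding by its posterior gives a continuous knob for intensity control, while the timbre embedding is left untouched, mirroring the paper's prosody/timbre disentanglement at the level of the conditioning signal.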
