Abstract
This paper proposes a novel emotional voice conversion (EVC) method based on a style generative adversarial network combined with dynamic fundamental frequency (F0) difference compensation. For spectrum mapping, building on EVC with the star generative adversarial network, we propose replacing one-hot vectors with emotion style features extracted from the spectrum to represent emotional information. This enables the model not only to fully learn the emotion style of the target speech, but also to transfer unseen emotion styles to a new utterance, i.e., one-shot EVC. For prosody transfer, the traditional logarithm Gaussian normalized transformation and its variants cannot capture the fine-grained F0 distribution differences between speech of different emotions, so we propose an improved strategy of dynamic F0 difference compensation. Experimental results show that the proposed method achieves high-quality one-shot EVC, significantly outperforming the baseline in speech quality and emotional saturation in both objective and subjective evaluations.
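For context, the traditional logarithm Gaussian normalized transformation that the proposed compensation strategy improves on can be sketched as follows. This is a minimal illustration of the standard baseline, not the authors' method; the function name and the convention that unvoiced frames carry F0 = 0 are assumptions for this sketch.

```python
import numpy as np

def log_gaussian_f0_transform(f0_src, mu_src, sigma_src, mu_tgt, sigma_tgt):
    """Standard log-Gaussian normalized F0 transformation (baseline sketch).

    Maps each voiced frame's log-F0 from the source emotion's Gaussian
    statistics (mu_src, sigma_src) onto the target emotion's statistics
    (mu_tgt, sigma_tgt). Unvoiced frames (F0 == 0) are left unchanged.
    """
    f0 = np.asarray(f0_src, dtype=float)
    out = f0.copy()
    voiced = f0 > 0
    log_f0 = np.log(f0[voiced])
    # Normalize by source statistics, then rescale to target statistics.
    out[voiced] = np.exp((log_f0 - mu_src) / sigma_src * sigma_tgt + mu_tgt)
    return out
```

Because this mapping applies a single global mean/variance shift per utterance, it cannot reflect frame-level F0 distribution differences between emotions, which motivates the dynamic compensation proposed in the paper.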