Abstract

Voice Conversion (VC) is a method of converting a source speaker's speech into a target speaker's speech without changing the source speaker's speech content. Current VC methods have the following problems: (1) they are applicable only to a limited number of speakers rather than to arbitrary speakers, which greatly restricts their application scenarios; (2) the representation (feature) separation (RS) achieved by current mainstream techniques on the source speaker's and target speaker's speech is not ideal; and (3) the conversion quality of most models is unsatisfactory and needs to be improved. Therefore, in this paper we construct a one-shot VC model based on representation separation, called the RS-VC model, implemented with an encoder-decoder structure. The encoder consists of a content encoder and a speaker encoder. The content encoder separates the content information of the source speaker's voice and generates a content representation, while the speaker encoder separates the speaker information of the target speaker's voice and generates a speaker representation. The decoder combines the content representation and the speaker representation to generate the converted voice. We obtain an optimized speaker verification (SV) model, SVINGE2E (Speaker Verification with Instance Normalization using Generalized End-to-End loss), by improving the basic SV model, and use it as the speaker encoder. This speaker encoder is trained in advance of RS-VC training; the pre-trained SVINGE2E model directly extracts the speaker representation of the target speaker's voice and is used for both training and testing the RS-VC model. We then propose a progressive training method for training the RS-VC model. Experiments show that the progressive training method effectively improves the quality of the converted voice, and that, compared with the basic speaker verification model, both SVINGE2E and RS-VC deliver impressive improvements in EER (Equal Error Rate).
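To make the encoder-decoder structure described above concrete, the following is a minimal PyTorch sketch. The layer choices, dimensions, and module names (ContentEncoder, Decoder, RSVC) are assumptions for illustration only; the paper's actual network configuration is not specified here.

```python
import torch
import torch.nn as nn

class ContentEncoder(nn.Module):
    """Separates linguistic content from the source speaker's mel-spectrogram."""
    def __init__(self, n_mels=80, dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_mels, dim, kernel_size=5, padding=2),
            nn.InstanceNorm1d(dim),   # instance normalization suppresses per-utterance speaker statistics
            nn.ReLU(),
            nn.Conv1d(dim, dim, kernel_size=5, padding=2),
        )

    def forward(self, mel):           # mel: (batch, n_mels, frames)
        return self.net(mel)          # content representation: (batch, dim, frames)

class Decoder(nn.Module):
    """Recombines the content and speaker representations into a converted mel-spectrogram."""
    def __init__(self, n_mels=80, dim=256, spk_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(dim + spk_dim, dim, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(dim, n_mels, kernel_size=5, padding=2),
        )

    def forward(self, content, spk_emb):
        # Broadcast the utterance-level speaker embedding over every frame, then decode.
        spk = spk_emb.unsqueeze(-1).expand(-1, -1, content.size(-1))
        return self.net(torch.cat([content, spk], dim=1))

class RSVC(nn.Module):
    """One-shot conversion: content from the source utterance, identity from the target utterance."""
    def __init__(self, speaker_encoder):
        super().__init__()
        self.content_encoder = ContentEncoder()
        self.speaker_encoder = speaker_encoder    # pre-trained SVINGE2E model used as the speaker encoder
        self.decoder = Decoder()

    def forward(self, source_mel, target_mel):
        content = self.content_encoder(source_mel)
        with torch.no_grad():                     # the speaker encoder is trained in advance and kept fixed here
            spk_emb = self.speaker_encoder(target_mel)
        return self.decoder(content, spk_emb)
```

At inference time, a single target utterance is enough to obtain the speaker representation, which is what makes the conversion one-shot.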

Highlights

  • Voice Conversion (VC) is a research branch of speech synthesis with a long research history

  • To improve the representation separation effect and extract more representative speaker representations, this paper optimizes the basic speaker verification (SV) model and obtains the optimized SV model SVINGE2E (Speaker Verification with Instance Normalization using Generalized End-to-End loss), which achieves up to a 41.72% improvement in Equal Error Rate (EER) over the basic SV model (a sketch of the GE2E loss follows this list)

  • To improve the training effect, this paper proposes a progressive training method for training the representation (feature) separation (RS)-VC model
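The GE2E loss referenced in the SVINGE2E name is the Generalized End-to-End loss of Wan et al. (2018). Below is a minimal sketch of its softmax variant, assuming speaker embeddings are batched as (speakers, utterances per speaker, dimension); the exact loss configuration used in the paper is not detailed here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GE2ELoss(nn.Module):
    """Softmax variant of the Generalized End-to-End loss (Wan et al., 2018).

    Expects embeddings shaped (N speakers, M utterances per speaker, D): each
    utterance is pulled toward its own speaker's centroid and pushed away from
    every other speaker's centroid.
    """
    def __init__(self):
        super().__init__()
        self.w = nn.Parameter(torch.tensor(10.0))   # learned similarity scale
        self.b = nn.Parameter(torch.tensor(-5.0))   # learned similarity bias

    def forward(self, emb):                          # emb: (N, M, D)
        N, M, _ = emb.shape
        emb = F.normalize(emb, dim=-1)
        centroids = F.normalize(emb.mean(dim=1), dim=-1)                  # (N, D)
        # Leave-one-out centroid of the utterance's own speaker avoids a trivial match.
        own = F.normalize((emb.sum(dim=1, keepdim=True) - emb) / (M - 1), dim=-1)

        sim = torch.einsum('nmd,kd->nmk', emb, centroids)                 # cosine similarities, (N, M, N)
        own_sim = (emb * own).sum(dim=-1)                                 # (N, M)
        eye = torch.eye(N, dtype=torch.bool, device=emb.device)
        # Replace each utterance's similarity to its own centroid with the leave-one-out value.
        sim = torch.where(eye.unsqueeze(1), own_sim.unsqueeze(-1).expand(-1, -1, N), sim)

        logits = (self.w * sim + self.b).reshape(N * M, N)
        labels = torch.arange(N, device=emb.device).repeat_interleave(M)
        return F.cross_entropy(logits, labels)
```

A batch here would be built by sampling N speakers and M utterances from each, running them through the speaker encoder, and reshaping the resulting embeddings to (N, M, D) before computing the loss.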


Summary

Introduction

Voice Conversion (VC) is a research branch of speech synthesis with a long research history. It is a method of converting a source speaker's speech into a target speaker's speech without changing the source speaker's speech content. VC based on the channel spectrum is mainly divided into four categories: (1) codebook mapping-based methods [1, 2, 3, 4], (2) Gaussian mixture model-based methods [5, 6, 7, 8, 9], (3) hidden Markov model-based methods [10], and (4) neural network-based conversion methods [11, 12].
