Abstract

Speech conversion has significant applications in medicine, robotics, and other fields. With the rise of deep learning, CycleGAN has been widely used for speech conversion. However, existing CycleGAN-based methods do not consider the speech signal's temporal and spatial features. In addition, CycleGAN training is difficult to converge because of the vanishing-gradient problem of the generator. We propose SSCGAN, whose generator is a U-shaped encoder-decoder network that extracts temporal and spatial features using 1D-CNN and 2D-CNN branches in parallel. A feature fusion module based on multi-scale mixed convolution is embedded between the encoder and decoder to achieve high-level fusion of the spatial and temporal features. To make training more stable and easier to converge, SSCGAN uses the Wasserstein distance instead of the original Jensen–Shannon divergence to measure the distance between probability distributions, which alleviates the generator's vanishing gradients. In addition, SSCGAN adopts the PatchGAN structure in the discriminator, which captures the samples' local details by dividing them into patches and improves the discriminative ability of SSCGAN. Experimental results on the non-parallel corpus VCC 2018 show that SSCGAN outperforms existing methods such as CycleGAN-VC and StarGAN-VC. In inter-gender speech conversion, the MSD of SSCGAN is 0.162 lower on average than that of the other methods, and in intra-gender speech conversion it is 0.118 lower on average. In the subjective evaluation, participants also rated SSCGAN the best.
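The abstract names three concrete mechanisms: parallel 1D/2D convolution branches over the spectrogram, a Wasserstein objective in place of the JS-divergence GAN loss, and a PatchGAN-style discriminator. The sketch below is not the authors' implementation; it is a minimal PyTorch illustration under assumed layer sizes (80 mel bins, 64 channels), and the fusion step here is a plain 1x1 convolution standing in for the paper's multi-scale mixed convolution module.

```python
# Minimal sketch (an illustration, not the authors' code) of the ideas named in
# the abstract: parallel 1D/2D convolution branches over a mel-spectrogram, a
# simple fusion step, a PatchGAN-style critic, and the Wasserstein losses.
# All layer sizes and the fusion scheme are assumptions.
import torch
import torch.nn as nn


class ParallelFeatureExtractor(nn.Module):
    """Extract temporal (1D conv) and spatial (2D conv) features in parallel."""

    def __init__(self, n_mels: int = 80, channels: int = 64):
        super().__init__()
        # Temporal branch: mel bins as input channels, convolve along time.
        self.temporal = nn.Sequential(
            nn.Conv1d(n_mels, channels, kernel_size=5, padding=2),
            nn.LeakyReLU(0.2),
        )
        # Spatial branch: treat the spectrogram as a one-channel image.
        self.spatial = nn.Sequential(
            nn.Conv2d(1, channels, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2),
        )
        # Stand-in fusion: the paper uses multi-scale mixed convolution; here
        # the two feature maps are concatenated and mixed by a 1x1 convolution.
        self.fuse = nn.Conv1d(2 * channels, channels, kernel_size=1)

    def forward(self, mel: torch.Tensor) -> torch.Tensor:
        # mel: (batch, n_mels, frames)
        t = self.temporal(mel)                      # (batch, C, frames)
        s = self.spatial(mel.unsqueeze(1)).mean(2)  # (batch, C, frames)
        return self.fuse(torch.cat([t, s], dim=1))  # (batch, C, frames)


class PatchCritic(nn.Module):
    """PatchGAN-style critic: scores local patches instead of the whole sample."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(channels, 2 * channels, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(2 * channels, 1, kernel_size=3, padding=1),  # patch score map
        )

    def forward(self, mel: torch.Tensor) -> torch.Tensor:
        return self.net(mel.unsqueeze(1))  # (batch, 1, h, w) patch scores


def critic_loss(critic: nn.Module, real: torch.Tensor, fake: torch.Tensor) -> torch.Tensor:
    # Wasserstein critic objective written as a loss to minimise:
    # E[D(fake)] - E[D(real)]. A Lipschitz constraint (weight clipping or a
    # gradient penalty) is still required in practice and is omitted here.
    return critic(fake).mean() - critic(real).mean()


def generator_loss(critic: nn.Module, fake: torch.Tensor) -> torch.Tensor:
    # The generator maximises the critic's score of the converted spectrogram.
    return -critic(fake).mean()


if __name__ == "__main__":
    # Example shapes: a batch of 4 utterances, 80 mel bins, 128 frames.
    real = torch.randn(4, 80, 128)
    fake = torch.randn(4, 80, 128)
    print(ParallelFeatureExtractor()(real).shape)         # torch.Size([4, 64, 128])
    print(critic_loss(PatchCritic(), real, fake).item())  # scalar critic loss
```

The convergence benefit the abstract refers to is the standard Wasserstein-GAN property: unlike the JS-based loss, the critic score keeps providing useful gradients to the generator even when real and converted samples are easily separated.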
