Recently, denoising diffusion models have demonstrated remarkable performance among generative models in various domains. In the speech domain, however, applying diffusion models to time-varying audio synthesis remains limited by model complexity and controllability. In particular, singing voice synthesis (SVS), which has begun to emerge as a practical application in the game and entertainment industries, requires high-dimensional samples with long-term acoustic features. To alleviate the challenges posed by model complexity in the SVS task, we propose HiddenSinger, a high-quality SVS system based on a neural audio codec and latent diffusion models. To ensure high-fidelity audio, we introduce an audio autoencoder that encodes audio into a compressed audio-codec representation and reconstructs high-fidelity audio from the low-dimensional latent vector. We then use latent diffusion models to sample a latent representation conditioned on a musical score. In addition, we extend the proposed model to an unsupervised singing voice learning framework, HiddenSinger-U, which can be trained on unlabeled singing voice data. Experimental results demonstrate that our model outperforms previous models in terms of audio quality. Furthermore, HiddenSinger-U can synthesize high-quality singing voices for speakers seen only in unlabeled data.
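To make the two-stage design concrete, below is a minimal, hypothetical sketch of the pipeline the abstract describes: an audio autoencoder that compresses waveform frames into a low-dimensional latent, and a latent diffusion model that samples such latents conditioned on a musical-score embedding. All module names, dimensions, and the DDPM-style sampler are illustrative assumptions, not the paper's actual architecture.

```python
# Hypothetical sketch of a latent-diffusion SVS pipeline; not the paper's code.
import torch
import torch.nn as nn

LATENT_DIM, SCORE_DIM, STEPS = 64, 32, 50

class AudioAutoencoder(nn.Module):
    """Compress a waveform frame into a latent vector and reconstruct it."""
    def __init__(self, frame_len=1024):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(frame_len, 256), nn.ReLU(),
                                     nn.Linear(256, LATENT_DIM))
        self.decoder = nn.Sequential(nn.Linear(LATENT_DIM, 256), nn.ReLU(),
                                     nn.Linear(256, frame_len))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

class LatentDenoiser(nn.Module):
    """Predict the noise in a latent, given the timestep and score condition."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + SCORE_DIM + 1, 256), nn.ReLU(),
            nn.Linear(256, LATENT_DIM))

    def forward(self, z_t, t, score):
        t_feat = t.float().unsqueeze(-1) / STEPS  # normalized timestep
        return self.net(torch.cat([z_t, t_feat, score], dim=-1))

@torch.no_grad()
def sample_latent(denoiser, score, betas):
    """DDPM-style ancestral sampling of a latent conditioned on a score."""
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)
    z = torch.randn(score.shape[0], LATENT_DIM)  # start from pure noise
    for t in reversed(range(STEPS)):
        t_batch = torch.full((score.shape[0],), t)
        eps = denoiser(z, t_batch, score)
        # Posterior mean of the reverse step (standard DDPM update rule).
        z = (z - betas[t] / torch.sqrt(1 - alpha_bar[t]) * eps) / torch.sqrt(alphas[t])
        if t > 0:
            z = z + torch.sqrt(betas[t]) * torch.randn_like(z)
    return z

# Usage: sample a latent from a (random) score embedding, then decode to audio.
autoencoder, denoiser = AudioAutoencoder(), LatentDenoiser()
betas = torch.linspace(1e-4, 0.02, STEPS)
score_embedding = torch.randn(1, SCORE_DIM)  # stand-in for a musical score
z = sample_latent(denoiser, score_embedding, betas)
waveform_frame = autoencoder.decoder(z)      # reconstruct audio from the latent
print(waveform_frame.shape)                  # torch.Size([1, 1024])
```

The key design point this sketch illustrates is that diffusion runs entirely in the compressed latent space, so the sampler operates on 64-dimensional vectors rather than raw high-dimensional audio, which is how the abstract's complexity concern is addressed.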