Abstract

We address unsupervised audio-visual speech enhancement based on variational autoencoders (VAEs), where the prior distribution of the clean speech spectrogram is modeled using an encoder-decoder architecture. At enhancement (test) time, the trained generative model (decoder) is combined with a noise model whose parameters need to be estimated. The initialization of the latent variables describing the generative process of the clean speech via the decoder is crucial, as the overall inference problem is non-convex. This is usually done by using the output of the trained encoder given the noisy audio and clean visual data as input. Current audio-visual VAE models do not provide an effective initialization because the two modalities are tightly coupled (concatenated) in the associated architectures. To overcome this issue, we introduce the mixture of inference networks variational autoencoder (MIN-VAE). Two encoder networks take audio and visual data as input, respectively, and the posterior of the latent variables is modeled as a mixture of the two Gaussian distributions output by the encoders. The mixture variable is also latent, so learning the optimal balance between the audio and visual encoders is unsupervised as well. By training a shared decoder, the overall network learns to adaptively fuse the two modalities. Moreover, at test time, the visual encoder, which takes (clean) visual data as input, is used for initialization. A variational inference approach is derived to train the proposed model. Thanks to the novel inference procedure and the robust initialization, the MIN-VAE outperforms its standard audio-only and audio-visual counterparts.
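The mixture posterior described above can be illustrated with a minimal NumPy sketch. All dimensions, the stand-in linear "encoders", and the fixed mixture weight `pi` are illustrative assumptions, not details from the paper; in the actual model the encoders are neural networks and the mixture variable is inferred, not fixed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (illustrative only): audio/visual features -> latent z
D_A, D_V, D_Z = 16, 8, 4

# Stand-in linear "encoders": each modality yields a Gaussian
# posterior q(z | x) = N(mu, diag(exp(logvar)))
W_a_mu, W_a_lv = rng.normal(size=(D_Z, D_A)), 0.1 * rng.normal(size=(D_Z, D_A))
W_v_mu, W_v_lv = rng.normal(size=(D_Z, D_V)), 0.1 * rng.normal(size=(D_Z, D_V))

def encode(x, W_mu, W_lv):
    """Return the mean and log-variance of a Gaussian posterior over z."""
    return W_mu @ x, W_lv @ x

def sample_mixture_posterior(x_audio, x_visual, pi=0.5):
    """Sample z from q(z) = pi * q_audio(z) + (1 - pi) * q_visual(z):
    first draw the latent mixture indicator, then sample the chosen
    Gaussian via the reparameterization trick."""
    mu_a, lv_a = encode(x_audio, W_a_mu, W_a_lv)
    mu_v, lv_v = encode(x_visual, W_v_mu, W_v_lv)
    use_audio = rng.random() < pi          # latent mixture variable
    mu, lv = (mu_a, lv_a) if use_audio else (mu_v, lv_v)
    eps = rng.normal(size=mu.shape)        # reparameterization noise
    return mu + np.exp(0.5 * lv) * eps

z = sample_mixture_posterior(rng.normal(size=D_A), rng.normal(size=D_V))
print(z.shape)  # (4,)
```

At test time, initialization would use only the visual branch (i.e., `pi = 0`), since the visual data is clean while the audio is noisy.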
