In many audio processing tasks, such as source separation, denoising, or compression, it is crucial to construct realistic and flexible models that capture the physical properties of audio signals. In the Bayesian framework, this can be accomplished through the use of appropriate prior distributions. In this paper, we describe a class of prior models called Gamma Markov random fields (GMRFs) that model the sparsity and local dependency of the energies (i.e., variances) of time-frequency expansion coefficients. A GMRF defines a non-normalized joint distribution over unobserved variance variables; given the field, the actual source coefficients are independent. Our construction ensures a positive coupling between the variance variables, so that signal energy changes smoothly over both axes, capturing temporal and spectral continuity. The coupling strength is controlled by a set of hyperparameters. Inference in the overall model is convenient because all variables are conditionally conjugate, but automatic optimization of the hyperparameters is crucial to obtain good fits. The marginal likelihood of the model is not available in closed form because the normalizing constant of a GMRF is intractable. In this paper, we optimize the hyperparameters of our GMRF-based audio model using contrastive divergence and compare this method to alternatives such as score matching and pseudolikelihood maximization where applicable. We present the performance of the GMRF models in denoising and single-channel source separation problems in completely blind scenarios, where all hyperparameters are jointly estimated given only the audio data.
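To make the structure of the model concrete, the following is a minimal sketch of the hierarchy the abstract describes. The symbols s, v, the zero-mean Gaussian coefficient layer, and the generic pairwise potential psi are illustrative assumptions, not the exact parameterization used in the paper.

```latex
% Illustrative sketch only (assumed notation, not the paper's exact construction):
% time-frequency coefficients s_{nu,tau} are conditionally independent given
% their variances v_{nu,tau}, here taken to be zero-mean Gaussian; the variances
% carry a non-normalized GMRF prior with positive coupling between time-frequency
% neighbours (i,j) in an edge set E, with coupling strength set by hyperparameters theta.
\begin{align}
  s_{\nu,\tau} \mid v_{\nu,\tau} &\sim \mathcal{N}\!\left(0,\, v_{\nu,\tau}\right), \\
  p(v \mid \theta) &= \frac{1}{Z(\theta)} \prod_{(i,j) \in \mathcal{E}} \psi\!\left(v_i, v_j; \theta\right),
  \qquad v_i > 0 .
\end{align}
```

In this sketch, Z(theta) is the intractable normalizing constant of the field; its unavailability is why the marginal likelihood cannot be evaluated directly and why the hyperparameters are instead fitted with contrastive divergence (or, where applicable, score matching or pseudolikelihood maximization).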