Abstract

Sparse-representation-based single-channel source separation, which aims to recover each source's signal using its corresponding sub-dictionary, has attracted considerable attention. The basic premise of this model is that each sub-dictionary possesses discriminative information about its corresponding source, and that this information can be used to recover almost every sample from that source. In a more general sense, however, the samples from a source contain not only discriminative information but also common information shared with other sources. This paper proposes learning a discriminative high-fidelity dictionary to improve separation performance. The innovations are threefold. Firstly, an extra sub-dictionary is added to the conventional union dictionary so that the source-specific sub-dictionaries capture only the purely discriminative information for their corresponding sources, while the common information is collected in the additional sub-dictionary. Secondly, a task-driven learning algorithm is designed to optimize the new union dictionary together with a set of weights that indicate how much of the common information should be allocated to each source. Thirdly, a source separation scheme based on the learned dictionary is presented. Experimental results on a human speech dataset show that our algorithm achieves better separation performance than both state-of-the-art and traditional algorithms.
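To make the separation scheme concrete, the following is a minimal sketch (not the authors' code) of how a mixture frame could be separated with an extended union dictionary [D1 | D2 | Dc], where Dc is the assumed extra sub-dictionary holding common information and (w1, w2) are hypothetical learned weights that split the common component between the two sources; the sparse coding step uses a simple ISTA solver for illustration.

```python
# Hedged sketch: sparse-representation-based separation with a shared
# sub-dictionary. D1, D2, Dc, w1, w2 and the ISTA solver are illustrative
# assumptions, not the paper's exact formulation.
import numpy as np

def soft_threshold(x, t):
    """Soft-thresholding operator for the l1 penalty."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_code_ista(D, y, lam=0.1, n_iter=200):
    """Solve min_x 0.5*||y - D x||^2 + lam*||x||_1 with ISTA."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - (D.T @ (D @ x - y)) / L, lam / L)
    return x

def separate(y, D1, D2, Dc, w1=0.5, w2=0.5, lam=0.1):
    """Recover two source estimates from a single-channel mixture frame y.
    Each source is rebuilt from its own sub-dictionary plus a weighted
    share of the common component reconstructed from Dc."""
    D = np.hstack([D1, D2, Dc])            # extended union dictionary
    x = sparse_code_ista(D, y, lam=lam)
    k1, k2 = D1.shape[1], D2.shape[1]
    x1, x2, xc = x[:k1], x[k1:k1 + k2], x[k1 + k2:]
    common = Dc @ xc                       # shared (non-discriminative) part
    s1 = D1 @ x1 + w1 * common
    s2 = D2 @ x2 + w2 * common
    return s1, s2
```

In this sketch the weights w1 and w2 play the role of the learned allocation of common information described in the abstract; in the paper they are optimized jointly with the dictionary by the task-driven learning algorithm rather than fixed by hand.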
