Abstract

Recently, a complementary joint sparse representation (CJSR) method was proposed for monaural speech enhancement, which exploits the relationships among speech, noise, and their mixture. One of its joint sparse representations (JSRs) uses the mapping between the mixture and the speech, while the other uses the mapping between the mixture and the noise. However, because these JSRs rely only on joint information and overcomplete dictionaries, the estimated speech and noise may contain components confused with each other. In this paper, a novel model with a fusion process is proposed, which additionally uses a composite dictionary, composed of a clean-speech dictionary and a noise dictionary, as prior knowledge. Once the estimated speech and noise are obtained by the JSRs, each is further sparsely represented on the composite dictionary. Because the composite dictionary is discriminative, the source-confusion problem of previous methods is handled well. To exploit the complementary knowledge of the two estimates, we propose a weighting parameter based on the residuals of their sparse representations on the composite dictionary: the smaller the residual, the larger the corresponding weight, and this weighting adapts flexibly and effectively to the noise type and speaker. Experimental results show that the proposed algorithm outperforms the other tested approaches.
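The residual-based weighting described above can be illustrated with a minimal sketch. The snippet below is an assumption-laden illustration, not the paper's exact formulation: it assumes the composite dictionary is the horizontal concatenation of the speech and noise dictionaries, uses orthogonal matching pursuit for the sparse coding step, and adopts a simple inverse-residual normalization as the fusion rule; the function names and the sparsity level are hypothetical.

```python
# Illustrative sketch of residual-based fusion of two JSR speech estimates
# on a composite dictionary D = [D_speech | D_noise]. The inverse-residual
# weighting rule and the OMP sparsity level are assumptions for illustration.
import numpy as np
from sklearn.linear_model import orthogonal_mp


def sparse_residual(D, x, n_nonzero=10):
    """Sparse-code x on dictionary D with OMP and return the residual norm."""
    coef = orthogonal_mp(D, x, n_nonzero_coefs=n_nonzero)
    return np.linalg.norm(x - D @ coef)


def fuse_estimates(D_speech, D_noise, s1, s2, n_nonzero=10, eps=1e-8):
    """Fuse the two JSR speech estimates s1 and s2.

    Each estimate is sparsely represented on the composite dictionary;
    the estimate with the smaller residual receives the larger weight
    (assumed inverse-residual rule).
    """
    D_comp = np.hstack([D_speech, D_noise])   # composite dictionary
    r1 = sparse_residual(D_comp, s1, n_nonzero)
    r2 = sparse_residual(D_comp, s2, n_nonzero)
    w1 = (1.0 / (r1 + eps)) / (1.0 / (r1 + eps) + 1.0 / (r2 + eps))
    w2 = 1.0 - w1
    return w1 * s1 + w2 * s2
```

Because the weights are recomputed per signal from the residuals, the fusion automatically shifts toward whichever JSR output is better explained by the composite dictionary, which is how the weighting can vary with noise type and speaker.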
