Abstract

In most current speech enhancement systems, the speech signal collected by an acoustic microphone is the only input data stream used to recover clean speech, so performance degrades heavily as the acoustic noise level rises. Based on the observation that noise and mismatch do not affect different data streams in the same way, this paper proposes a new speech enhancement framework that exploits multi-stream information through a multi-stream model-based speech filter, even when some data streams are not directly related to the speech waveform. A new speech enhancement method built on this framework is also proposed, using simultaneous acoustic and throat microphone recordings. Experimental results show that the proposed method outperforms several conventional single-stream speech enhancement methods in different noisy environments.
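As one concrete, hypothetical illustration of how a second, noise-robust stream can assist enhancement, the sketch below uses the throat-microphone stream, which is largely insensitive to airborne noise, to locate speech-free frames, estimates the acoustic noise spectrum from those frames, and applies a Wiener-like gain to the acoustic-microphone spectrogram. The function name, thresholds, and overall scheme are illustrative assumptions only and are not the paper's multi-stream model-based filter.

import numpy as np
from scipy.signal import stft, istft

def throat_informed_filter(acoustic, throat, fs=16000, nperseg=512):
    """Hypothetical two-stream sketch (not the paper's model): the throat
    stream gates a noise estimate that drives a Wiener-like mask on the
    acoustic-microphone spectrogram."""
    _, _, A = stft(acoustic, fs=fs, nperseg=nperseg)   # noisy acoustic stream
    _, _, T = stft(throat, fs=fs, nperseg=nperseg)     # throat stream

    # Frames where the throat stream is quiet are treated as speech-free
    # and used to estimate the noise power spectrum of the acoustic stream.
    frame_energy = np.mean(np.abs(T) ** 2, axis=0)
    silence = frame_energy < 0.1 * np.median(frame_energy)
    if not silence.any():                               # fall back to the quietest frames
        silence = frame_energy <= np.quantile(frame_energy, 0.1)
    noise_psd = np.mean(np.abs(A[:, silence]) ** 2, axis=1, keepdims=True)

    # Wiener-like gain from the estimated a-priori SNR, floored at zero.
    snr = np.maximum(np.abs(A) ** 2 / (noise_psd + 1e-10) - 1.0, 0.0)
    gain = snr / (snr + 1.0)

    _, enhanced = istft(gain * A, fs=fs, nperseg=nperseg)
    return enhanced

A caller would pass time-aligned acoustic and throat recordings sampled at the same rate; the multi-stream filter proposed in the paper would take the place of this simple gating scheme.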
