Abstract

In this paper, we propose a novel speech enhancement method based on the multi-band excitation (MBE) model. The MBE model is a well-established and efficient speech coding technique. Motivated by the high quality of its synthetic speech, we introduce the MBE model into a single-channel speech enhancement system. In the MBE model, the entire frequency band is divided into several sub-bands, and each sub-band is declared either voiced or unvoiced. To reconstruct speech, three acoustic parameters of the MBE model must be estimated: the pitch, the harmonic magnitudes, and the voiced/unvoiced (V/UV) decision for each band. To estimate these parameters accurately, deep neural networks (DNNs) are used to predict the harmonic magnitudes and the V/UV decisions. To help the networks learn the mapping between noisy and clean features, noisy speech covering different noise types and different input signal-to-noise ratios (SNRs) is combined into a large training set. The remaining parameter, the pitch, is computed from the pre-processed speech using the MBE analysis method. Moreover, a speech presence probability is introduced to further suppress residual noise. Experimental results show that the proposed method provides higher speech quality and intelligibility than several reference methods.
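To make the parameter set concrete, the MBE synthesis step can be sketched as below. This is an illustrative approximation under assumed parameter shapes (a single pitch value per frame, one magnitude per harmonic, and one V/UV flag per sub-band), not the paper's implementation; in particular, full MBE synthesis generates unvoiced bands from band-limited noise, whereas here a random-phase harmonic serves as a crude stand-in.

```python
import math
import random

def synthesize_mbe_frame(f0, harmonic_mags, band_vuv, fs=8000, frame_len=160):
    """Synthesize one speech frame from MBE parameters (illustrative sketch).

    f0            : pitch in Hz (in the paper, obtained by MBE analysis
                    of the pre-processed speech)
    harmonic_mags : magnitude for each harmonic of f0 (in the paper,
                    estimated by a DNN)
    band_vuv      : True (voiced) / False (unvoiced) flag per sub-band
                    (in the paper, also estimated by a DNN); each sub-band
                    is assumed here to cover a contiguous group of harmonics
    """
    n_harm = len(harmonic_mags)
    harms_per_band = max(1, n_harm // len(band_vuv))
    frame = [0.0] * frame_len
    for k in range(1, n_harm + 1):
        # Map harmonic k to its sub-band's V/UV decision.
        band = min((k - 1) // harms_per_band, len(band_vuv) - 1)
        mag = harmonic_mags[k - 1]
        if band_vuv[band]:
            # Voiced band: deterministic harmonic of the pitch.
            phase = 0.0
        else:
            # Unvoiced band: random phase as a crude stand-in for the
            # band-limited noise used in full MBE synthesis.
            phase = random.uniform(0.0, 2.0 * math.pi)
        for n in range(frame_len):
            frame[n] += mag * math.cos(2.0 * math.pi * k * f0 * n / fs + phase)
    return frame
```

For example, a 100 Hz pitch with 20 unit-magnitude harmonics and four sub-bands alternating voiced/unvoiced yields a 160-sample frame mixing periodic and noise-like components.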
