Abstract

With the development of deep learning, speech enhancement based on deep neural networks has made great breakthroughs. Methods based on the U-Net structure achieve good denoising performance. However, some of them rely on ordinary convolution operations, which may ignore the contextual information and detailed features of the input speech. To address this issue, many studies have improved model performance by adding network modules such as attention mechanisms and long short-term memory (LSTM). In this work, therefore, a time-domain U-Net speech enhancement model is proposed that combines a lightweight Shuffle Attention mechanism with a compressed sensing loss (CS loss). Time-domain dilated residual blocks are constructed and used for down-sampling and up-sampling in this model. Shuffle Attention is applied to the final output of the encoder to focus on speech features and suppress irrelevant audio information. A new loss is defined using the compressed-sensing measurements of the clean speech and the enhanced speech, which further removes noise from the noisy speech. In the experimental part, the influence of different loss functions on model performance is examined through ablation experiments, and the effectiveness of the CS loss is verified. Compared with the reference models, the proposed model obtains higher speech quality and intelligibility scores with fewer parameters. When dealing with noise outside the dataset, the proposed model still achieves good denoising performance, which shows that it not only achieves a good enhancement effect but also generalizes well.
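The CS loss described above compares clean and enhanced speech through their compressed-sensing measurements rather than the raw waveforms. A minimal NumPy sketch of this idea follows; the choice of a random Gaussian measurement matrix, the measurement count `m`, and the MSE comparison are illustrative assumptions, not details fixed by the abstract.

```python
import numpy as np

# Fixed seed so the (assumed) Gaussian measurement matrix is reproducible.
_rng = np.random.default_rng(0)

def cs_loss(clean, enhanced, m=128, rng=_rng):
    """Sketch of a compressed-sensing loss: project both waveforms with
    a random Gaussian measurement matrix Phi (an assumption here) and
    compare the resulting measurement vectors with MSE."""
    clean = np.asarray(clean, dtype=np.float64)
    enhanced = np.asarray(enhanced, dtype=np.float64)
    n = clean.shape[-1]
    # Phi has shape (m, n): m compressed measurements of an n-sample signal.
    phi = rng.standard_normal((m, n)) / np.sqrt(m)
    return float(np.mean((phi @ clean - phi @ enhanced) ** 2))
```

In training, a term like this would be added to the time-domain reconstruction loss, so the network is penalized for mismatches in the measurement domain as well as in the waveform domain.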
