Abstract

Speech enhancement has made great progress with the development of deep learning; however, current methods rarely consider how weights are distributed across the features extracted by convolution kernels during network training. In this paper, we design a channel attention (CA) module that assigns attention weights to the different channels of the convolutional feature maps, and we embed it into the deep complex convolution recurrent network (DCCRN), a mainstream speech enhancement model. In particular, the CA module dynamically computes per-channel weights during the convolution process. We train the model on the DNS-2020 dataset. Experimental results show that the network with the CA module achieves significant improvements in PESQ and STOI, and generally yields gains across different SNR levels.
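To illustrate the general idea of per-channel attention weighting described above, here is a minimal NumPy sketch of a squeeze-and-excitation-style channel attention gate. This is an illustrative assumption, not the paper's exact CA module or its DCCRN integration; the function name, weight shapes, and reduction ratio are all hypothetical.

```python
import numpy as np

def channel_attention(features, w1, b1, w2, b2):
    """Hypothetical SE-style channel attention sketch (not the paper's exact CA).

    features: array of shape (C, F, T) -- C feature channels over a
              frequency-by-time map produced by a convolutional layer.
    w1, b1:   bottleneck weights, w1 has shape (C // r, C) for reduction r.
    w2, b2:   expansion weights, w2 has shape (C, C // r).
    Returns the reweighted features and the per-channel weights.
    """
    C = features.shape[0]
    # Squeeze: global average pooling collapses each channel's map to a scalar
    z = features.reshape(C, -1).mean(axis=1)            # shape (C,)
    # Excitation: small bottleneck MLP, ReLU then sigmoid gate in (0, 1)
    h = np.maximum(0.0, w1 @ z + b1)                    # shape (C // r,)
    s = 1.0 / (1.0 + np.exp(-(w2 @ h + b2)))            # shape (C,)
    # Scale: each channel is multiplied by its learned attention weight
    return features * s[:, None, None], s
```

In a trained network the weights `w1, b1, w2, b2` are learned jointly with the convolution kernels, so the gate values `s` adapt dynamically to each input, which is the behavior the abstract attributes to the CA module.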
