Abstract

In recent studies of speech enhancement, a deep-learning model is trained to predict clean speech spectra from noisy speech spectra. Rather than using the traditional discrete Fourier transform (DFT), this paper considers other well-known transforms for generating the speech spectra used in deep-learning-based speech enhancement. In addition to the DFT, seven transforms were tested: the discrete cosine transform, discrete sine transform, discrete Haar transform, discrete Hadamard transform, discrete Tchebichef transform, discrete Krawtchouk transform, and discrete Tchebichef-Krawtchouk transform. Two deep-learning architectures were tested: convolutional neural networks (CNNs) and fully connected neural networks. Experiments were performed on the NOIZEUS database, and various speech quality and intelligibility measures were adopted for performance evaluation. The quality and intelligibility scores of the enhanced speech show that the discrete sine transform is better suited for front-end processing with a CNN, as it outperformed the DFT in this application. The results also demonstrate that combining two or more existing transforms can improve performance under specific conditions. These findings suggest that the DFT should not be assumed optimal for front-end processing with deep neural networks (DNNs); on this basis, other discrete transforms should be taken into account when designing robust DNN-based speech processing applications.
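As a minimal sketch of the front-end choice the abstract describes, the snippet below computes three alternative real-valued spectra of a single windowed frame using SciPy: the one-sided DFT magnitude, the discrete cosine transform, and the discrete sine transform. The frame length, window, and random stand-in signal are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.fft import fft, dct, dst

# Assumed frame setup (not from the paper): 256-sample Hann-windowed frame.
frame_len = 256
rng = np.random.default_rng(0)
frame = rng.standard_normal(frame_len)        # stand-in for a speech frame
x = frame * np.hanning(frame_len)

# Candidate spectral representations for the network input/target.
dft_spec = np.abs(fft(x))[: frame_len // 2 + 1]  # one-sided DFT magnitude
dct_spec = dct(x, type=2, norm="ortho")          # discrete cosine transform
dst_spec = dst(x, type=2, norm="ortho")          # discrete sine transform

print(dft_spec.shape, dct_spec.shape, dst_spec.shape)
```

Note that the DCT and DST are real and orthonormal (with `norm="ortho"`), so an enhanced spectrum can be inverted directly without a separate phase estimate, which is one practical motivation for swapping them in for the DFT magnitude.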
