Abstract

Deep neural networks (DNNs) have been widely applied to speech recognition and enhancement. In this paper we present experiments using deep rectifier neural networks for speech denoising. Rectified linear units (ReLUs) naturally yield sparse activations between hidden layers. We analyze the use of a regularization coefficient during training to encourage greater sparseness. This method further improves the generalization ability of the DNN regression model in unseen noisy conditions. After pruning and retraining the sparse network, the computational and storage costs are substantially reduced without degrading performance, making it easier to deploy speech denoising DNNs on portable devices.
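
As a rough illustration of the ideas summarized above (not the authors' actual recipe), the sketch below builds a ReLU regression DNN in PyTorch, adds an L1 penalty on the hidden activations as one plausible sparsity-encouraging regularizer, and prunes small-magnitude weights. The framework choice, the layer sizes, the coefficient `lam`, and the pruning threshold are all assumptions introduced for the example.

```python
# Minimal sketch of a sparse ReLU denoising DNN: regression training with an
# L1 activity penalty, followed by magnitude pruning. All hyperparameters here
# are illustrative assumptions, not values from the paper.
import torch
import torch.nn as nn

class DenoisingDNN(nn.Module):
    def __init__(self, dim=257, hidden=1024, layers=3):
        super().__init__()
        blocks, prev = [], dim
        for _ in range(layers):
            blocks += [nn.Linear(prev, hidden), nn.ReLU()]
            prev = hidden
        self.hidden = nn.Sequential(*blocks)
        self.out = nn.Linear(prev, dim)   # regression output: clean features

    def forward(self, x):
        h = self.hidden(x)                # activations of the last hidden layer
        return self.out(h), h             # return them for the sparsity penalty

model = DenoisingDNN()
opt = torch.optim.SGD(model.parameters(), lr=1e-3)
lam = 1e-4                                # sparsity regularization coefficient (assumed)

noisy = torch.randn(32, 257)              # stand-in for noisy spectral features
clean = torch.randn(32, 257)              # stand-in for clean targets

opt.zero_grad()
pred, acts = model(noisy)
# MSE regression loss plus L1 penalty on hidden activations to push them to zero.
loss = nn.functional.mse_loss(pred, clean) + lam * acts.abs().mean()
loss.backward()
opt.step()

# Magnitude pruning: zero out small weights. Retraining would then continue
# with the same loss while keeping the pruned positions fixed at zero.
with torch.no_grad():
    for lin in (m for m in model.modules() if isinstance(m, nn.Linear)):
        mask = lin.weight.abs() > 0.01    # illustrative pruning threshold
        lin.weight.mul_(mask.float())
```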
