Abstract

In this paper, we propose a cluster-based senone selection method to speed up the computation of deep neural networks (DNNs) at decoding time in automatic speech recognition (ASR) systems. In DNN-based acoustic models, the large number of senones at the output layer is one of the main sources of the high computational complexity of DNNs. Inspired by the mixture selection method designed for Gaussian mixture model (GMM)-based acoustic models, our proposed method selects only a subset of the senones at the output layer of the DNN to calculate posterior probabilities. The senone selection strategy is derived by clustering acoustic features according to their transformed representations at the top hidden layer of the DNN acoustic model. Experimental results on Mandarin speech recognition tasks show that our method reduces the average number of DNN parameters used for computation by 22% and accelerates the overall recognition process by 13% without significant performance degradation. Experimental results on the Switchboard task demonstrate that our method reduces the average number of DNN parameters used for computation by 38.8% for conventional DNN modeling and by 22.7% for low-rank DNN modeling, respectively, with negligible performance loss.
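To make the idea concrete, the following is a minimal sketch of how cluster-based senone selection could be implemented: an offline step clusters the top-hidden-layer activations of training frames (plain k-means here) and keeps, per cluster, the senones carrying the most accumulated posterior mass; at decoding time each frame is assigned to its nearest cluster and the output layer is evaluated only over that cluster's senone subset. All names (`build_senone_subsets`, `decode_frame`, `W_out`, `b_out`) and the particular clustering and ranking choices are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical sketch of cluster-based senone selection.
# Names and design choices are illustrative, not the paper's implementation.

def build_senone_subsets(top_hidden_feats, posteriors,
                         n_clusters=256, keep_frac=0.6, n_iters=10):
    """Offline step: k-means over top-hidden-layer activations of training
    frames; per cluster, keep the senones with the largest accumulated
    posterior mass among that cluster's frames."""
    n_frames, _ = top_hidden_feats.shape
    rng = np.random.default_rng(0)
    centroids = top_hidden_feats[rng.choice(n_frames, n_clusters, replace=False)]
    for _ in range(n_iters):
        # Squared Euclidean distances, computed without a huge 3-D tensor.
        d = ((top_hidden_feats ** 2).sum(1, keepdims=True)
             - 2.0 * top_hidden_feats @ centroids.T
             + (centroids ** 2).sum(1))
        assign = d.argmin(1)
        for k in range(n_clusters):
            members = top_hidden_feats[assign == k]
            if len(members):
                centroids[k] = members.mean(0)
    # Rank senones within each cluster by accumulated posterior mass.
    n_senones = posteriors.shape[1]
    n_keep = int(keep_frac * n_senones)
    subsets = []
    for k in range(n_clusters):
        mass = posteriors[assign == k].sum(0)
        subsets.append(np.argsort(mass)[::-1][:n_keep])
    return centroids, subsets

def decode_frame(h_top, centroids, subsets, W_out, b_out, floor=1e-10):
    """Decoding step: find the nearest cluster for the frame's top-hidden
    activation, then compute the output layer only over the selected
    senones (slicing the rows of the output weight matrix)."""
    k = ((centroids - h_top) ** 2).sum(1).argmin()
    sel = subsets[k]
    logits = W_out[sel] @ h_top + b_out[sel]
    probs = np.full(len(b_out), floor)       # unselected senones get a floor value
    e = np.exp(logits - logits.max())
    probs[sel] = e / e.sum()                 # softmax restricted to the subset
    return probs
```

The computational saving comes from the `W_out[sel] @ h_top` line: since the output weight matrix typically dominates the DNN's parameter count when there are thousands of senones, evaluating only a fraction of its rows per frame directly reduces the parameters touched at decoding time, which matches the reductions the abstract reports.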
