Abstract

Recurrent neural network language models (RNNLMs) have become an increasingly popular choice for state-of-the-art speech recognition systems. RNNLMs are normally trained by minimizing the cross entropy (CE) criterion using the stochastic gradient descent (SGD) algorithm. However, SGD does not take the correlation between parameters into account and can therefore lead to unstable and slow convergence during training. Second-order optimization methods offer a possible solution to this issue, but they are either computationally expensive or fail to deliver competitive performance. In this paper, a novel optimization method, stochastic natural gradient based on a minimum variance assumption (SNGM), is proposed for training RNNLMs. It allows the natural gradient method to operate at a training efficiency comparable to that of SGD. By modifying the gradient according to the local curvature of the KL-divergence between the current and updated probability distributions, the proposed SNGM approach is shown to outperform both the SGD and limited-memory BFGS methods in terms of both perplexity and word error rate across three tasks: Penn Treebank, Switchboard conversational speech recognition, and AMI meeting room transcription.
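For context, the generic natural-gradient step that this family of methods builds on can be sketched as follows; the notation (theta_t, eta, g_t, F) is assumed here for illustration, and the paper's specific SNGM update, including its minimum variance assumption, is not reproduced from this abstract.

% Generic natural-gradient step (background sketch, not the paper's SNGM update).
% \theta_t : model parameters, \eta : learning rate,
% g_t = \nabla_\theta L(\theta_t) : cross entropy (CE) gradient,
% F(\theta_t) : Fisher information matrix.
\[
  \theta_{t+1} = \theta_t - \eta\, F(\theta_t)^{-1} g_t,
  \qquad
  F(\theta) = \mathbb{E}\!\left[ \nabla_\theta \log p_\theta(x)\,
                                 \nabla_\theta \log p_\theta(x)^{\top} \right].
\]
% Since D_{\mathrm{KL}}\big(p_\theta \,\|\, p_{\theta+\delta}\big)
%   \approx \tfrac{1}{2}\, \delta^{\top} F(\theta)\, \delta  for small \delta,
% preconditioning the gradient by F^{-1} is what "modifying the gradient according to
% the local curvature of the KL-divergence" refers to, whereas plain SGD simply uses
%   \theta_{t+1} = \theta_t - \eta\, g_t.

Forming and inverting the full Fisher matrix is what makes exact natural gradient impractical for large RNNLMs; closing that efficiency gap with SGD is the aim of the proposed SNGM approach.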
