Abstract

The cross-entropy (CE) loss function is commonly adopted for neural network language model (NNLM) training. Although this criterion has been largely successful, as evidenced by the rapid advance of NNLMs, minimizing CE only maximizes the likelihood of the training data. When training data are insufficient, the generalization power of the resulting LM on test data is limited. In this paper, we propose to integrate a pairwise ranking (PR) loss with the CE loss for multi-objective training of a recurrent neural network language model (RNNLM). The PR loss emphasizes discrimination between target and non-target words and also reserves probability mass for low-frequency correct words, which complements the distribution-learning role of the CE loss. Combining the two losses may therefore improve the performance of the RNNLM. In addition, we incorporate multi-task learning (MTL) into the proposed multi-objective learning, regularizing the primary task of RNNLM training with an auxiliary task of part-of-speech (POS) tagging. The proposed approach to RNNLM learning has been evaluated on two speech recognition tasks, WSJ and AMI, with encouraging word error rate reductions.
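To make the multi-objective idea concrete, the sketch below shows one plausible way to combine a CE term with a margin-based pairwise ranking term over RNNLM output logits. This is an illustration only, not the authors' implementation: the interpolation weight `alpha`, the margin, and the random sampling of non-target words are assumptions introduced here for clarity.

```python
# Illustrative sketch (assumed, not the paper's exact formulation):
# combine cross-entropy with a margin-based pairwise ranking loss
# computed over the output logits of an RNNLM.
import torch
import torch.nn.functional as F

def combined_loss(logits, targets, alpha=0.5, margin=1.0, num_negatives=5):
    """logits: (batch, vocab) next-word scores; targets: (batch,) gold word indices."""
    # Cross-entropy term: standard distribution learning over the vocabulary.
    ce = F.cross_entropy(logits, targets)

    # Pairwise ranking term: the target word's score should exceed the scores
    # of sampled non-target words by at least `margin` (hinge formulation).
    batch, vocab = logits.shape
    target_scores = logits.gather(1, targets.unsqueeze(1))              # (batch, 1)
    neg_idx = torch.randint(0, vocab, (batch, num_negatives),
                            device=logits.device)                       # sampled non-targets
    neg_scores = logits.gather(1, neg_idx)                               # (batch, K)
    pr = F.relu(margin - target_scores + neg_scores).mean()

    # Multi-objective combination of the two losses.
    return alpha * ce + (1.0 - alpha) * pr
```

In this sketch the ranking term only pushes target scores above sampled competitors, so it discriminates between target and non-target words without forcing the full distribution fit that CE already provides.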
