Abstract

Discriminative training has attracted considerable attention from both the machine learning and speech recognition communities. The discriminative approach constructs a model that distinguishes correct samples from incorrect ones, whereas the conventional generative approach estimates the distribution of correct samples. We propose a novel discriminative training method and apply it to a language model for reranking speech recognition hypotheses. The proposed method uses a round-robin duel discrimination (R2D2) criterion, in which all pairs of sentence hypotheses, including pairs of incorrect sentences, are distinguished from each other while taking their error rates into account. Since the objective function is convex, the global optimum can be found with a standard parameter estimation method such as a quasi-Newton method. Furthermore, the proposed method is an extension of the global conditional log-linear model, whose objective function corresponds to that of conditional random fields. Our experimental results show that R2D2 outperforms conventional methods in many settings, spanning different languages, different feature constructions, and different task difficulties.
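The abstract describes the objective only at a high level; the sketch below shows one plausible pairwise log-linear reranking loss of the kind described, not the paper's exact R2D2 formulation: every pair of n-best hypotheses is compared, each pair's contribution is weighted by the difference in their error rates, the resulting objective is convex in the weights, and it is optimized with a quasi-Newton method (L-BFGS). The function names, feature vectors, and toy data are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def pairwise_loss(w, feats, errors):
    """Pairwise reranking loss for one utterance (illustrative only).

    feats:  (n, d) feature vectors of the n hypotheses in the n-best list.
    errors: (n,)   error rates (e.g. WER) of those hypotheses.
    """
    scores = feats @ w
    loss = 0.0
    n = len(errors)
    for i in range(n):
        for j in range(n):
            if errors[i] < errors[j]:               # hypothesis i is better
                weight = errors[j] - errors[i]      # error-rate-aware weight
                margin = scores[i] - scores[j]
                # Softplus of the negated margin, i.e. log(1 + exp(-margin)):
                # convex in w, so the summed objective stays convex.
                loss += weight * np.logaddexp(0.0, -margin)
    return loss

def total_loss(w, data):
    """Sum the pairwise loss over all utterances."""
    return sum(pairwise_loss(w, f, e) for f, e in data)

# Toy data: two utterances, each with 3 hypotheses and 4 features.
rng = np.random.default_rng(0)
data = [(rng.normal(size=(3, 4)), np.array([0.1, 0.3, 0.5])) for _ in range(2)]

# Quasi-Newton optimization (L-BFGS); because the objective is convex,
# the optimizer converges to the global optimum.
w0 = np.zeros(4)
result = minimize(total_loss, w0, args=(data,), method="L-BFGS-B")
print(result.x)
```

In this sketch the error-rate weight plays the role the abstract attributes to R2D2: pairs whose error rates differ more contribute more to the loss, so even two incorrect hypotheses are still ranked against each other.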
