Abstract

This paper describes a discriminative approach that further advances the framework for Weighted Finite State Transducer (WFST) based decoding. The approach introduces additional linear models for adjusting the scores of a decoding graph composed of conventional information source models (e.g., hidden Markov models and N-gram models), and views the WFST-based decoding process as a linear classifier for structured data (e.g., sequential multiclass data). The difficulty with the approach is that the number of dimensions of the additional linear models becomes very large in proportion to the number of arcs in a WFST, and our previous study applied it only to a small task (TIMIT phoneme recognition). This paper proposes a training method for a large-scale linear classifier employed in WFST-based decoding by using a distributed perceptron algorithm. The experimental results show that the proposed approach was successfully applied to a large vocabulary continuous speech recognition task and achieved an improvement over minimum phone error based discriminative training of acoustic models.

Index Terms: speech recognition, weighted finite state transducer, linear classifier, distributed perceptron, large vocabulary continuous speech recognition
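To make the training scheme named in the abstract concrete, the following is a minimal sketch of a distributed structured perceptron via iterative parameter mixing: the training data are sharded, one perceptron epoch is run per shard from the same initial weights, and the resulting shard weights are averaged. The feature map, decoder, and data layout here are simplified stand-ins chosen for illustration, not the paper's WFST-based implementation, where features index the arcs of the decoding graph.

```python
# Hedged sketch of a distributed (structured) perceptron with iterative
# parameter mixing.  `candidates(x)` is assumed to return the hypothesis
# space for input x (in the paper this would come from WFST decoding).
from collections import defaultdict

def features(x, y):
    """Hypothetical feature map; in the paper, features index WFST arcs."""
    return defaultdict(float, {(x_i, y_i): 1.0 for x_i, y_i in zip(x, y)})

def decode(x, weights, candidates):
    """Pick the candidate hypothesis whose features score highest."""
    def score(y):
        return sum(weights[f] * v for f, v in features(x, y).items())
    return max(candidates(x), key=score)

def perceptron_epoch(shard, weights, candidates):
    """One standard structured-perceptron pass over a single data shard."""
    w = defaultdict(float, weights)
    for x, y_ref in shard:
        y_hat = decode(x, w, candidates)
        if y_hat != y_ref:
            # Promote reference features, demote features of the wrong guess.
            for f, v in features(x, y_ref).items():
                w[f] += v
            for f, v in features(x, y_hat).items():
                w[f] -= v
    return w

def distributed_perceptron(shards, candidates, epochs=10):
    """Train each shard independently per epoch, then average the weights."""
    weights = defaultdict(float)
    for _ in range(epochs):
        shard_weights = [perceptron_epoch(s, weights, candidates) for s in shards]
        mixed = defaultdict(float)
        for w in shard_weights:
            for f, v in w.items():
                mixed[f] += v / len(shard_weights)
        weights = mixed
    return weights
```

In practice the per-shard epochs would run in parallel across machines; only the weight-averaging step requires communication, which is what makes the perceptron update tractable when the feature dimension grows with the number of arcs in the WFST.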
