Abstract
This paper proposes a discriminative model for speech recognition that directly optimizes the parameters of a speech model represented as a decoding graph. During recognition, a decoder, given an input speech signal, searches for an appropriate label sequence among the possible combinations provided by separate knowledge sources, e.g., acoustic, lexicon, and language models. It is more reasonable to use an integrated knowledge source, which is composed of these models and forms the overall space searched by the decoder, than to use the separate sources. This paper aims to estimate a speech model composed in this way directly in the search network, unlike conventional discriminative training approaches, which estimate parameters at the acoustic or language model level. Our approach is formulated as optimization of the weight parameters of log-linear distributions on the decoding arcs of a Weighted Finite-State Transducer (WFST), so that a large network can be handled efficiently in static form. The weight parameters are estimated with an averaged perceptron algorithm. The experimental results show that, especially when the model size is small, the proposed approach provides better recognition performance than conventional maximum likelihood estimation, and performance comparable to or slightly better than that of conventional discriminative training approaches.
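To make the parameter estimation concrete, the following is a minimal sketch, not taken from the paper, of an averaged-perceptron update applied to per-arc weights of a decoding graph. The helpers `decode(w, x)` (returns the best path under the current weights) and `arc_counts(path)` (returns how often each arc is traversed on a path) are hypothetical placeholders for the WFST decoder and path bookkeeping.

```python
from collections import defaultdict

def averaged_perceptron(training_data, decode, arc_counts, epochs=5):
    """Sketch of averaged-perceptron training of arc weights.

    training_data: list of (speech_features, reference_path) pairs.
    decode(w, x):  hypothetical decoder returning the best path under weights w.
    arc_counts(p): hypothetical helper returning {arc_id: count} for path p.
    """
    w = defaultdict(float)      # current arc weight parameters
    w_sum = defaultdict(float)  # running sum for averaging
    updates = 0

    for _ in range(epochs):
        for x, ref_path in training_data:
            hyp_path = decode(w, x)  # best path under current weights
            if hyp_path != ref_path:
                # Reward arcs on the reference path, penalize arcs on the
                # erroneous hypothesis path (structured perceptron update).
                for arc, c in arc_counts(ref_path).items():
                    w[arc] += c
                for arc, c in arc_counts(hyp_path).items():
                    w[arc] -= c
            # Accumulate the current weights after every example so the
            # final model is the average over all updates.
            for arc, v in w.items():
                w_sum[arc] += v
            updates += 1

    # Averaging reduces sensitivity to the order and lateness of updates.
    return {arc: v / updates for arc, v in w_sum.items()}
```

Averaging the weight vector over all updates, rather than keeping only the final one, is the standard way to stabilize perceptron training on structured outputs; the sketch assumes arc-level features, consistent with optimizing weights directly in the search network rather than in the acoustic or language model layers.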