Abstract

This paper presents a maximum-likelihood (ML) stochastic matching approach to decrease the acoustic mismatch between a test utterance and a given set of speech models, so as to reduce the recognition performance degradation caused by distortions in the test utterance and/or the model set. We assume that the speech signal is modeled by a set of subword hidden Markov models (HMMs) $\Lambda_X$. The mismatch between the observed test utterance $Y$ and the models $\Lambda_X$ can be reduced in two ways: 1) by an inverse distortion function $F_{\nu}(\cdot)$ that maps $Y$ into an utterance $X$ that matches better with the models $\Lambda_X$, and 2) by a model transformation function $G_{\eta}(\cdot)$ that maps $\Lambda_X$ to a transformed model $\Lambda_Y$ that matches better with the utterance $Y$. We assume the functional form of the transformations $F_{\nu}(\cdot)$ or $G_{\eta}(\cdot)$ and estimate the parameters $\nu$ or $\eta$ in an ML manner using the expectation-maximization (EM) algorithm. The choice of the form of $F_{\nu}(\cdot)$ or $G_{\eta}(\cdot)$ is based on prior knowledge of the nature of the acoustic mismatch. The stochastic matching algorithm operates only on the given test utterance and the given set of speech models; no additional training data is required to estimate the mismatch prior to actual testing. Experimental results are presented to study the properties of the proposed algorithm and to verify the efficacy of the approach in improving the performance of an HMM-based continuous speech recognition system in the presence of mismatch due to different transducers and transmission channels.
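
As a sketch of the estimation criterion described above (using the abstract's notation; the paper's exact objective and auxiliary function may differ in detail), the feature-space and model-space matching problems can be written as ML estimates, with the hidden HMM state and mixture sequences handled by EM:

```latex
\[
\hat{\nu} \;=\; \arg\max_{\nu}\; p\bigl(Y \mid \nu, \Lambda_X\bigr),
\qquad
\hat{\eta} \;=\; \arg\max_{\eta}\; p\bigl(Y \mid G_{\eta}(\Lambda_X)\bigr),
\]
\[
\nu^{(k+1)} \;=\; \arg\max_{\nu}\;
E_{S,\,C \,\mid\, Y,\;\nu^{(k)},\,\Lambda_X}
\Bigl[\,\log p\bigl(Y, S, C \mid \nu, \Lambda_X\bigr)\Bigr],
\]
```

where $S$ and $C$ denote the hidden HMM state and mixture-component sequences, and the EM iteration for $\eta$ has the same form with $G_{\eta}(\Lambda_X)$ in place of $(\nu, \Lambda_X)$.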

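To make the procedure concrete, below is a minimal, hypothetical Python sketch assuming one common choice for $F_{\nu}$: an additive cepstral bias, $F_{\nu}(y_t) = y_t - b$, with diagonal-covariance Gaussian mixture densities. The function names, the flat-mixture approximation of the HMM state densities, and the single-utterance update loop are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def log_gauss_diag(y, mu, var):
    """Log N(y; mu_m, diag(var_m)) for each mixture m: y (D,), mu (M, D), var (M, D)."""
    diff = y - mu                                              # (M, D)
    return -0.5 * np.sum(np.log(2.0 * np.pi * var) + diff**2 / var, axis=1)

def em_bias_update(Y, mu, var, w, b, n_iter=5):
    """
    EM re-estimation of an additive cepstral bias b, assuming the distortion
    model F_nu(y_t) = y_t - b and a flat Gaussian-mixture approximation of the
    HMM state densities (an illustrative simplification).
    Y: (T, D) observed frames; mu, var: (M, D) mixture means/variances;
    w: (M,) mixture weights; b: (D,) initial bias estimate.
    """
    for _ in range(n_iter):
        X = Y - b                                              # current "cleaned" frames
        # E-step: posterior gamma[t, m] of mixture m given frame t.
        log_post = np.stack([np.log(w) + log_gauss_diag(x, mu, var) for x in X])
        log_post -= log_post.max(axis=1, keepdims=True)
        gamma = np.exp(log_post)
        gamma /= gamma.sum(axis=1, keepdims=True)              # (T, M)
        # M-step: closed-form variance-weighted bias update, per dimension.
        num = np.einsum('tm,tmd->d', gamma,
                        (Y[:, None, :] - mu[None, :, :]) / var[None, :, :])
        den = np.einsum('tm,md->d', gamma, 1.0 / var)
        b = num / den
    return b
```

A caller would pass the frames of the test utterance together with pooled Gaussian parameters from $\Lambda_X$; in the full algorithm the posteriors would instead come from the HMM forward-backward or Viterbi alignment rather than a flat mixture, but the structure of the E- and M-steps is the same.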