Abstract

In this paper, a signal-to-noise ratio (SNR)-incremental stochastic matching (SISM) algorithm is proposed for robust speech recognition in noisy environments. The SISM algorithm extends Sankar and Lee's (1996) stochastic matching (SM) to deal with distortion due to additive noise. We address two issues concerning the original maximum-likelihood-based SM techniques. The first is that the initial condition of the expectation-maximization (EM) algorithm must be set carefully when the mismatch between training and testing conditions is large. The second is that performance is often limited by the newly adapted model used in noise compensation, rather than reaching the higher level of accuracy typically obtained in clean environments. Our proposed SISM algorithm attempts to improve the initial condition and to relax this performance bound. First, the SISM algorithm provides a good initial condition by making use of a set of environment-matched models. Second, it performs a recursive operation in which the reference model in each recursion is changed in the direction of increasing SNR, so as to push recognition performance toward that obtained at higher SNR levels. Experimental results show that the SISM algorithm provides further improvement after the best environment-matched performance has been reached, and can therefore obtain additional discriminative power by using speech models with higher SNR rather than a retraining process.
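The SNR-incremental recursion described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the model representation and the `adapt` callable (standing in for one stochastic-matching EM pass) are hypothetical placeholders.

```python
def sism(test_features, models_by_snr, adapt):
    """Sketch of SNR-incremental stochastic matching (SISM).

    models_by_snr: reference models ordered from the environment-matched
        (lowest/test) SNR up to the highest (cleanest) SNR available.
    adapt: placeholder for one stochastic-matching (EM) adaptation pass;
        it takes a reference model, the current model, and the test
        features, and returns an updated model.
    """
    # Step 1: initialize from the environment-matched model, which gives
    # the EM algorithm a good starting point under large mismatch.
    current = models_by_snr[0]
    # Step 2: recursive operation -- at each step, swap in the reference
    # model at the next-higher SNR, pushing performance toward the
    # accuracy obtained at higher SNR levels.
    for reference in models_by_snr[1:]:
        current = adapt(reference, current, test_features)
    return current
```

For example, with scalar stand-ins for models and an averaging `adapt`, `sism(None, [0.0, 5.0, 10.0, 15.0], lambda ref, cur, feats: (ref + cur) / 2)` walks the current model toward the highest-SNR reference one recursion at a time.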
