Abstract
The results of speaker recognition methods using vector quantization (VQ) distortion and discrete or continuous ergodic hidden Markov models (HMMs) are compared. The effectiveness of these methods is examined from the viewpoint of robustness against utterance variation such as differences in content, temporal variation, and changes in utterance speed. It is shown that the continuous HMM performs much better than the discrete HMM and that its performance approaches that of the VQ distortion method. When the amount of training data is limited, however, the VQ distortion method achieves a better recognition rate than the continuous HMM. The transition information between the states is shown to contribute little to identifying the individual characteristics of a voice. Increasing the number of states and increasing the number of mixture components per state have equivalent effects, and recognition performance is almost completely determined by the product of these two numbers.
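A minimal sketch of the VQ distortion scoring idea referred to above: a codebook is trained per speaker, and an unknown utterance is assigned to the speaker whose codebook gives the lowest average quantization distortion over the test frames. The feature dimensionality, codebook size, and function names here are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
from scipy.cluster.vq import kmeans, vq

def train_codebook(features, codebook_size=64):
    """Build a speaker codebook from training feature vectors
    (e.g., cepstral coefficients), one row per frame."""
    codebook, _ = kmeans(features.astype(float), codebook_size)
    return codebook

def vq_distortion(features, codebook):
    """Average distance from each test frame to its nearest codeword."""
    _, per_frame_dist = vq(features.astype(float), codebook)
    return per_frame_dist.mean()

def identify_speaker(test_features, codebooks):
    """Return the speaker whose codebook yields the minimum average distortion."""
    return min(codebooks, key=lambda spk: vq_distortion(test_features, codebooks[spk]))

# Hypothetical usage with random stand-ins for cepstral feature frames:
# rng = np.random.default_rng(0)
# codebooks = {spk: train_codebook(rng.normal(size=(500, 12)) + i)
#              for i, spk in enumerate(["spk_a", "spk_b"])}
# print(identify_speaker(rng.normal(size=(200, 12)) + 1, codebooks))
```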