Abstract
The hidden Markov model (HMM) is currently the most popular approach to speech recognition; however, finding a good HMM topology and its optimised model parameters remains of great interest to researchers in this area. In our previous work, we successfully applied the genetic algorithm (GA) to the HMM training process to obtain optimised model parameters (Chau et al., Proc. ICASSP (1997) 1727). In this paper, we extend that work and propose a new training method, based on the GA and the Baum–Welch algorithm, that obtains an HMM with an optimised number of states as well as its optimised model parameters. This method not only overcomes the slow convergence of the simple GA-HMM approach, but also finds a better number of states for the HMM topology together with its model parameters. In our experiments on 100 words extracted from the TIMIT corpus, the method found the optimal topology in all cases, and the HMMs trained by our GA-HMM method showed better recognition capability than those trained by the Baum–Welch algorithm. Furthermore, on 290 words randomly selected from the TIMIT database, the GA-HMM approach achieved a recognition rate of 95.86%, while the Baum–Welch method achieved 93.1%. This implies that the HMMs trained by our GA-HMM method are better optimised than those trained by the Baum–Welch method.
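The abstract's core idea, evolving the number of HMM states with a GA while Baum–Welch supplies the fitness signal, can be sketched as follows. This is a minimal toy illustration, not the paper's actual algorithm: the chromosome encoding, genetic operators, and fitness function used by the authors are not given in the abstract. Here `fitness(n)` is a hypothetical stand-in for "train an n-state HMM with Baum–Welch and return its log-likelihood", and the GA operators (truncation selection, averaging crossover, ±1 mutation) are illustrative assumptions.

```python
import random

def ga_search_num_states(fitness, state_range=(2, 12), pop_size=8,
                         generations=20, mutation_rate=0.3, seed=0):
    """Toy GA over the number of HMM states.

    fitness(n) stands in for training an n-state HMM with Baum-Welch
    and scoring it (e.g. by log-likelihood); the real paper's fitness
    and operators are not specified in the abstract.
    """
    rng = random.Random(seed)
    lo, hi = state_range
    # Initial population: random state counts in the allowed range.
    pop = [rng.randint(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        # Truncation selection: keep the fitter half as parents (elitism).
        parents = sorted(pop, key=fitness, reverse=True)[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a + b) // 2                 # averaging "crossover"
            if rng.random() < mutation_rate:     # mutate by +/- one state
                child = min(hi, max(lo, child + rng.choice((-1, 1))))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Stand-in fitness peaking at 7 states; in practice this would be the
# Baum-Welch training log-likelihood of an n-state model.
best = ga_search_num_states(lambda n: -(n - 7) ** 2)
print(best)
```

Because the fittest individuals survive each generation unchanged, the best state count found so far is never lost, which is the property that lets the GA search topology space while Baum–Welch handles parameter estimation within each candidate model.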