Abstract

Traditional Expectation-Maximization (EM) training of a Gaussian Mixture Model (GMM) is essentially a batch-mode procedure that requires a data set of sufficient size to update the model parameters. This severely limits the deployment and adaptation of GMMs in many real-time online systems, where newly observed data samples are expected to be incorporated into the system as soon as they become available, which under batch training requires retraining the model. This paper presents a new online incremental EM training procedure for GMMs, which performs the EM training incrementally and can therefore adapt a GMM online, sample by sample. The proposed method is developed as an extension of two EM algorithms for GMMs, namely Split-and-Merge EM and traditional EM. Experiments on both synthetic data and a speech processing task demonstrate the advantages and efficiency of the new method.
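The abstract does not give the paper's update equations, so the sketch below illustrates the general idea with a generic stepwise (sample-by-sample) EM update for a diagonal-covariance GMM: each new sample triggers a single-sample E-step, and the sufficient statistics are blended in with a decaying step size before the parameters are re-derived. The class name OnlineGMM, the step-size schedule, and all parameter names are illustrative assumptions, not the paper's notation or method.

    import numpy as np

    class OnlineGMM:
        """Sample-by-sample (stepwise) EM sketch for a diagonal-covariance GMM.

        Hypothetical illustration; not the paper's exact procedure.
        """

        def __init__(self, means, variances, weights, decay=0.6):
            self.means = np.asarray(means, dtype=float)      # (K, D) component means
            self.vars = np.asarray(variances, dtype=float)   # (K, D) diagonal variances
            self.weights = np.asarray(weights, dtype=float)  # (K,) mixing weights
            self.decay = decay   # step-size exponent, typically in (0.5, 1]
            self.t = 0
            # Exponentially weighted sufficient statistics of the E-step
            self.s0 = self.weights.copy()
            self.s1 = self.weights[:, None] * self.means
            self.s2 = self.weights[:, None] * (self.vars + self.means ** 2)

        def _responsibilities(self, x):
            # E-step for one sample: posterior p(component k | x)
            diff = x - self.means
            log_p = (np.log(self.weights)
                     - 0.5 * np.sum(np.log(2.0 * np.pi * self.vars)
                                    + diff ** 2 / self.vars, axis=1))
            log_p -= log_p.max()            # stabilize before exponentiating
            p = np.exp(log_p)
            return p / p.sum()

        def update(self, x):
            # Incremental M-step: blend one sample into the statistics
            x = np.asarray(x, dtype=float)
            self.t += 1
            gamma = self.t ** (-self.decay)  # Robbins-Monro step size
            r = self._responsibilities(x)
            self.s0 = (1.0 - gamma) * self.s0 + gamma * r
            self.s1 = (1.0 - gamma) * self.s1 + gamma * r[:, None] * x
            self.s2 = (1.0 - gamma) * self.s2 + gamma * r[:, None] * x ** 2
            # Recover parameters from the updated statistics
            self.weights = self.s0 / self.s0.sum()
            self.means = self.s1 / self.s0[:, None]
            self.vars = np.maximum(self.s2 / self.s0[:, None] - self.means ** 2, 1e-6)

A short usage example on a synthetic one-dimensional stream, mirroring the abstract's synthetic-data setting:

    rng = np.random.default_rng(0)
    gmm = OnlineGMM(means=[[-2.0], [2.0]], variances=[[1.0], [1.0]], weights=[0.5, 0.5])
    for _ in range(5000):
        k = rng.integers(2)
        gmm.update(rng.normal((-1.0, 4.0)[k], 1.0, size=1))
    print(gmm.means.ravel(), gmm.weights)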
