Abstract

For Gaussian mixture learning, the expectation-maximization (EM) algorithm and its modified versions are widely used, but two major limitations remain: (i) the number of components, or Gaussians, must be known in advance, and (ii) there is no generally accepted method of parameter initialization that prevents the algorithm from being trapped in one of the local maxima of the likelihood function. To overcome these weaknesses, we propose a greedy EM algorithm based on a kurtosis and skewness criterion. Specifically, we start with a single component and add one component at a time within the EM framework so as to decrease the value of the kurtosis and skewness measure, which provides an efficient index of how well the Gaussian mixture model fits the sample data. In this way, the number of components can be selected adaptively during EM learning, and the learned parameters may escape from local maxima.
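To make the greedy scheme concrete, the following is a minimal Python sketch for 1-D data. It assumes a particular form of the kurtosis and skewness measure (the mixing-weight average of each component's responsibility-weighted |skewness| + |excess kurtosis|) and a simple split-based insertion rule; the paper's exact criterion and insertion strategy may differ, and the names em_fit, ks_measure, and greedy_em are hypothetical.

```python
# Sketch of a greedy EM for a 1-D Gaussian mixture with a
# kurtosis/skewness stopping criterion. Illustrative only: the
# measure and the component-insertion rule are assumptions.
import numpy as np

def gaussian_pdf(x, mean, std):
    """Density of N(mean, std^2) at each point of x."""
    return np.exp(-0.5 * ((x - mean) / std) ** 2) / (std * np.sqrt(2 * np.pi))

def em_fit(x, means, stds, weights, n_iter=100):
    """Standard EM updates for a 1-D Gaussian mixture."""
    for _ in range(n_iter):
        # E-step: responsibilities r[i, k] = P(component k | x_i).
        dens = np.stack([w * gaussian_pdf(x, m, s)
                         for m, s, w in zip(means, stds, weights)], axis=1)
        r = dens / (dens.sum(axis=1, keepdims=True) + 1e-300)
        # M-step: responsibility-weighted parameter re-estimation.
        nk = r.sum(axis=0) + 1e-12
        weights = nk / len(x)
        means = (r * x[:, None]).sum(axis=0) / nk
        stds = np.sqrt((r * (x[:, None] - means[None, :]) ** 2).sum(axis=0) / nk)
        stds = np.maximum(stds, 1e-6)  # guard against collapsing components
    return means, stds, weights

def ks_measure(x, means, stds, weights):
    """Assumed fit index: per component, the responsibility-weighted
    |skewness| + |excess kurtosis| of the standardized data; each
    component of a well-fitted mixture should look locally Gaussian,
    so both moments should be near zero."""
    dens = np.stack([w * gaussian_pdf(x, m, s)
                     for m, s, w in zip(means, stds, weights)], axis=1)
    r = dens / (dens.sum(axis=1, keepdims=True) + 1e-300)
    per_comp = []
    for k in range(len(means)):
        z = (x - means[k]) / stds[k]
        rk = r[:, k] / (r[:, k].sum() + 1e-12)
        skew = np.sum(rk * z ** 3)
        kurt = np.sum(rk * z ** 4) - 3.0
        per_comp.append(abs(skew) + abs(kurt))
    return float(np.dot(weights, per_comp)), int(np.argmax(per_comp))

def greedy_em(x, max_components=10, tol=1e-3):
    """Start from one component; repeatedly split the worst-scoring
    component while the global measure keeps decreasing."""
    means, stds, weights = np.array([x.mean()]), np.array([x.std()]), np.array([1.0])
    means, stds, weights = em_fit(x, means, stds, weights)
    best, worst = ks_measure(x, means, stds, weights)
    while len(means) < max_components:
        # Insert a new component by splitting the worst-fitting one.
        m, s, w = means[worst], stds[worst], weights[worst]
        means_n = np.append(np.delete(means, worst), [m - s, m + s])
        stds_n = np.append(np.delete(stds, worst), [s, s])
        weights_n = np.append(np.delete(weights, worst), [w / 2, w / 2])
        means_n, stds_n, weights_n = em_fit(x, means_n, stds_n, weights_n)
        score, worst_n = ks_measure(x, means_n, stds_n, weights_n)
        if score >= best - tol:
            break  # the measure no longer decreases: stop adding components
        means, stds, weights, best, worst = means_n, stds_n, weights_n, score, worst_n
    return means, stds, weights

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.concatenate([rng.normal(-3, 1, 500), rng.normal(2, 0.5, 500)])
    means, stds, weights = greedy_em(x)
    print("components:", len(means))
    print("means:", np.round(np.sort(means), 2))
```

On this two-mode example the loop typically stops at two components: after the split, each component is locally near-Gaussian, so a further split no longer decreases the measure. Note how the criterion replaces a user-supplied component count, and each split restarts EM from an informed initialization rather than a random one.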
