Abstract

Maximum likelihood estimation of discrete latent variable (DLV) models is usually performed by the expectation-maximization (EM) algorithm. A well-known drawback of this approach is the multimodality of the log-likelihood function, which may cause the algorithm to converge to a local maximum rather than the global one. We propose a tempered EM algorithm that adequately explores the parameter space for two main classes of DLV models, namely latent class and hidden Markov models. We compare the proposal with the standard EM algorithm through an extensive Monte Carlo simulation study, evaluating both the ability to reach the global maximum and the computational time. We also present analyses of discrete and continuous cross-sectional and longitudinal data from several applications of interest. All results provide supporting evidence that the proposal outperforms the standard EM algorithm, significantly improving the chance of reaching the global maximum. The advantage remains relevant even when the overall computing time is taken into account.
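
The abstract does not spell out the tempering scheme, so the sketch below is an illustration only: one common way to graft tempering onto EM for a latent class model with binary items is to raise the E-step posterior to the power 1/T and renormalize, annealing the temperature T toward 1, at which point the updates coincide with standard EM. The function name `tempered_em_lca`, the geometric cooling schedule, and all default parameter values are assumptions made for this sketch, not the profile used in the paper.

```python
import numpy as np

def tempered_em_lca(Y, K, T0=5.0, rate=0.9, max_iter=500, tol=1e-8, rng=None):
    """Illustrative tempered EM for a latent class model with binary items.

    NOTE: a minimal sketch of the general tempering idea, not the
    paper's exact algorithm or tempering profile.

    Y    : (n, J) binary data matrix
    K    : number of latent classes
    T0   : initial temperature (T0 > 1 flattens the posterior early on)
    rate : geometric cooling factor; T anneals toward 1, where the
           updates reduce to standard EM
    """
    rng = np.random.default_rng(rng)
    n, J = Y.shape
    pi = np.full(K, 1.0 / K)                   # class weights
    theta = rng.uniform(0.25, 0.75, (K, J))    # item-response probabilities
    T, prev_ll = T0, -np.inf

    for _ in range(max_iter):
        # E-step: class-conditional log-likelihood of each observation
        log_p = (Y @ np.log(theta).T
                 + (1 - Y) @ np.log(1 - theta).T)      # (n, K)
        log_post = np.log(pi) + log_p
        # Tempering: raise the posterior to the power 1/T before
        # normalizing; while T > 1 this flattens the responsibilities
        # and helps the algorithm move between basins of attraction.
        z = log_post / T
        z -= z.max(axis=1, keepdims=True)              # numerical stability
        w = np.exp(z)
        w /= w.sum(axis=1, keepdims=True)              # responsibilities

        # M-step: weighted maximum likelihood updates
        Nk = w.sum(axis=0)
        pi = Nk / n
        theta = np.clip((w.T @ Y) / Nk[:, None], 1e-6, 1 - 1e-6)

        # Monitor the (untempered) log-likelihood; stop only once T = 1
        ll = np.logaddexp.reduce(log_post, axis=1).sum()
        if T == 1.0 and abs(ll - prev_ll) < tol:
            break
        prev_ll = ll
        T = max(1.0, T * rate)

    return pi, theta, ll
```

Because the tempered E-step equals the standard one at T = 1, convergence is checked only after the schedule has cooled completely, so the final solution is a genuine local maximizer of the untempered log-likelihood.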
