Abstract

Growth mixture models (GMMs) are widely used to model unknown population heterogeneity via distinct latent classes. However, GMMs are riddled with convergence issues, often requiring researchers to atheoretically alter the model with cross-class constraints simply to obtain convergence. We discuss how within-class random effects in GMMs exacerbate convergence issues, even though these random effects rarely help answer typical research questions. That is, latent classes provide a discretization of continuous random effects, so including additional random effects within latent classes can unnecessarily complicate the model. These random effects are commonly included to properly specify the marginal covariance; however, random effects are an inefficient way to pattern a covariance matrix, and they create estimation issues. The same goal can be achieved more simply through covariance pattern models, which we extend to the mixture model context in this article (covariance pattern mixture models, or CPMMs). We provide evidence from theory, simulation, and an empirical example showing that employing CPMMs (even if they are misspecified) instead of GMMs can circumvent the computational difficulties that can plague GMMs, without sacrificing the ability to answer the types of questions commonly asked in empirical studies. Our results show the advantages of CPMMs with respect to improved class enumeration and less biased class-specific growth trajectories, in addition to their vastly improved convergence rates. The results also show that constraining the covariance parameters across classes in order to bypass convergence issues with GMMs leads to poor results. An extensive software appendix is included to assist researchers in running CPMMs in Mplus.
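To make the distinction concrete, below is a minimal Mplus sketch of a CPMM, not the article's appendix code: it assumes four repeated measures y1-y4, three latent classes, a placeholder data file (growth.dat), and a compound-symmetry residual pattern, all of which are illustrative choices rather than details taken from the article.

    TITLE:    Minimal CPMM sketch (illustrative setup);
    DATA:     FILE = growth.dat;          ! placeholder file name
    VARIABLE: NAMES = y1-y4;
              CLASSES = c(3);             ! class count is illustrative
    ANALYSIS: TYPE = MIXTURE;
    MODEL:
      %OVERALL%
      i s | y1@0 y2@1 y3@2 y4@3;    ! linear growth factors
      i@0; s@0; i WITH s@0;         ! no within-class random effects
      y1-y4 (v);                    ! equal residual variances and
      y1 WITH y2-y4 (cv);           ! equal residual covariances:
      y2 WITH y3-y4 (cv);           ! a compound-symmetry pattern
      y3 WITH y4 (cv);

In a GMM, the within-class covariance would instead be generated by freely estimated variances of i and s; here those are fixed at zero and the marginal covariance is patterned directly, which is the simplification the abstract describes. Class-specific patterns could be obtained by repeating the covariance statements with different labels under %c#1%, %c#2%, and so on.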
