Abstract

Researchers in psychology are increasingly using model selection strategies to decide among competing models, rather than evaluating the fit of a given model in isolation. However, such interest in model selection outpaces an awareness that one or a few cases can have a disproportionate impact on the model ranking. Although case influence on the fit of a single model in isolation has often been studied, case influence on model selection results remains greatly underappreciated in psychology. This article introduces the issue of case influence on model selection and proposes 3 influence diagnostics for commonly used selection indices: the chi-square difference test, the Bayesian information criterion, and Akaike's information criterion. These 3 diagnostics can be obtained simply from the byproducts of full information maximum likelihood estimation without heavy computational burden. We provide practical information on the interpretation and behavior of these diagnostics for applied researchers and provide software code to facilitate their use. Simulated and empirical examples, involving different kinds of model comparison scenarios encountered in cross-sectional, longitudinal, and multilevel research as well as different kinds of outcome distributions, illustrate the generality of the proposed diagnostics. An awareness of how cases influence model selection results is shown to aid researchers in understanding how representative their sample-level results are at the case level.
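
To illustrate the general idea of diagnostics built from the byproducts of full information maximum likelihood estimation, the following Python sketch approximates case influence on an AIC/BIC comparison of two models by dropping each case's log-likelihood contribution without re-estimating the models. This is a minimal illustration under that simplifying assumption, not the article's exact diagnostics; the function names and inputs are hypothetical, and the casewise log-likelihood contributions are assumed to come from an FIML fit of each competing model.

```python
import numpy as np

def aic(loglik, k):
    """Akaike's information criterion from a total log-likelihood and parameter count."""
    return -2.0 * loglik + 2.0 * k

def bic(loglik, k, n):
    """Bayesian information criterion from a total log-likelihood, parameter count, and sample size."""
    return -2.0 * loglik + k * np.log(n)

def case_influence_on_selection(ll_a, ll_b, k_a, k_b):
    """
    Approximate case-deletion influence on the AIC/BIC comparison of models A and B.

    ll_a, ll_b : casewise log-likelihood contributions (one entry per case) for each model,
                 e.g. byproducts of FIML estimation (hypothetical inputs).
    k_a, k_b   : numbers of free parameters in models A and B.

    Returns, for each case, the change in the AIC difference and the BIC difference
    (model A minus model B) when that case's contribution is removed. Large values
    flag cases whose deletion could alter the model ranking.
    """
    ll_a, ll_b = np.asarray(ll_a), np.asarray(ll_b)
    n = len(ll_a)

    # Full-sample selection indices
    d_aic_full = aic(ll_a.sum(), k_a) - aic(ll_b.sum(), k_b)
    d_bic_full = bic(ll_a.sum(), k_a, n) - bic(ll_b.sum(), k_b, n)

    # Leave-one-out indices: drop each case's contribution from both models'
    # total log-likelihoods, without re-estimating the parameters.
    ll_a_loo = ll_a.sum() - ll_a
    ll_b_loo = ll_b.sum() - ll_b
    d_aic_loo = aic(ll_a_loo, k_a) - aic(ll_b_loo, k_b)
    d_bic_loo = bic(ll_a_loo, k_a, n - 1) - bic(ll_b_loo, k_b, n - 1)

    return d_aic_full - d_aic_loo, d_bic_full - d_bic_loo
```

Because only stored casewise contributions are summed and subtracted, the sketch mirrors the abstract's point that such diagnostics can be screened without refitting the models for every case; a sign change in the leave-one-out difference relative to the full-sample difference would indicate a case capable of reversing the selection.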
