Abstract

Background: Inappropriate and unacceptable disregard for structural equation model (SEM) testing can be traced back to: factor-analytic inattention to model testing, misapplication of the Wilkinson task force’s [Am Psychol 54:594-604, 1999] critique of tests, exaggeration of test biases, and uncomfortably numerous model failures.

Discussion: The arguments for disregarding structural equation model testing are reviewed and found to be misguided or flawed. The fundamental test-supporting observations are: (a) that the null hypothesis of the χ2 structural equation model test is not nil but notable, because it contains substantive theory claims and consequences; and (b) that the amount of covariance ill fit cannot be trusted to report the seriousness of model misspecifications. All covariance-based fit indices risk failing to expose model problems because the extent of model misspecification does not reliably correspond to the magnitude of covariance ill fit: seriously causally misspecified models can fit, or almost fit.

Summary: The only reasonable research response to evidence of non-chance structural equation model failure is to diagnostically investigate the reasons for failure. Unfortunately, many SEM-based theories and measurement scales will require reassessment if we are to clear the backlogged consequences of previous deficient model testing. Fortunately, it will be easier for researchers to respect evidence pointing toward required reassessments than to suffer manuscript rejection and shame for disrespecting evidence potentially signaling serious model misspecifications.

Highlights

  • Inappropriate and unacceptable disregard for structural equation model (SEM) testing can be traced back to: factor-analytic inattention to model testing, misapplication of the Wilkinson task force’s [Am Psychol 54:594-604, 1999] critique of tests, exaggeration of test biases, and uncomfortably numerous model failures

  • It will be easier for researchers to respect evidence pointing toward required reassessments than to suffer manuscript rejection and shame for disrespecting evidence potentially signaling serious model misspecifications

  • Rodgers [1] compiled a history of null hypothesis significance testing and reported a “quiet revolution” that “obviated” many of the criticisms previously leveled against testing


Discussion

Argument 1: the nil null

In many testing contexts the null hypothesis is a nil hypothesis because it corresponds to no effect, no relationship, or no correlation. The “theoretical elasticity” of factors being retrospectively labeled as features common to item sets, rather than being specific theorized latent causes of items [36,37], inclined factor models to be seen as nil-null hypotheses rather than as theorized and notable null hypotheses. This theory laxity, supplemented by entrenched but weak factor rules of thumb, resulted in so many failing models that factor-based researchers readily adopted a host of test-displacing fit indices rather than address significant model failures.

Nor have we listed all the indices used to detract from structural equation model testing. It should be clear from the discussion preceding Argument 2 above that the amount of covariance ill fit, by any covariance-based index, cannot be trusted to correspond to the seriousness of a model’s misspecifications, because it is possible for even seriously causally misspecified models to provide perfect covariance fit.
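For orientation, a sketch of the standard maximum-likelihood χ2 model test may help make the notable-null claim concrete (standard SEM notation; this sketch is supplied for illustration and is not quoted from the article):

\[
H_0:\; \Sigma = \Sigma(\theta), \qquad
T = (N-1)\,F_{ML}\!\left(S,\, \Sigma(\hat{\theta})\right) \;\sim\; \chi^2_{df}, \qquad
df = \frac{p(p+1)}{2} - t,
\]

where S is the sample covariance matrix of the p observed variables, Σ(θ) is the covariance matrix implied by the model’s t free parameters, and F_ML is the maximum-likelihood discrepancy function. The null hypothesis is notable rather than nil because Σ(θ) encodes the model’s substantive causal claims.

The possibility of perfect covariance fit from a causally wrong model can also be shown directly. The following minimal numpy sketch (my own illustration, not code from the article; the chain coefficient 0.8 is an arbitrary choice) shows that the population covariance matrix of a causal chain x → y → z is reproduced exactly by a one-factor measurement model:

    import numpy as np

    # True data-generating structure: a standardized causal chain x -> y -> z.
    b = 0.8  # assumed effect of x on y, and of y on z
    Sigma_true = np.array([
        [1.0,   b,     b * b],
        [b,     1.0,   b    ],
        [b * b, b,     1.0  ],
    ])

    # Causally misspecified model: one common factor causing x, y, and z
    # (factor variance fixed at 1). With three indicators the loadings
    # follow in closed form from the three covariances:
    s_xy, s_xz, s_yz = Sigma_true[0, 1], Sigma_true[0, 2], Sigma_true[1, 2]
    lam = np.array([
        np.sqrt(s_xy * s_xz / s_yz),  # loading of x
        np.sqrt(s_xy * s_yz / s_xz),  # loading of y
        np.sqrt(s_xz * s_yz / s_xy),  # loading of z
    ])
    theta = np.diag(Sigma_true) - lam ** 2  # unique variances

    # Covariance matrix implied by the (wrong) factor model:
    Sigma_factor = np.outer(lam, lam) + np.diag(theta)
    print(np.allclose(Sigma_factor, Sigma_true))  # True: perfect covariance fit

The factor model here reproduces Sigma_true exactly, so T = 0 and every covariance-based fit index is perfect, even though the model’s causal specification is entirely wrong.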

Competing interests

The author declares that he has no competing interests.
