Abstract

Replication confronts psychological theory with data, not only in experimental research but also in model-based research. Goodness of fit (GOF) of the original model to the replication data is routinely presented as meaningful evidence of replication. We demonstrate, however, that GOF obscures important differences between the original and replication studies. As an alternative, we present Bayesian prior predictive similarity checking: a tool for rigorously evaluating the degree to which the data patterns and parameter estimates of a model replication study resemble those of the original study. We apply this method to original and replication data from the National Comorbidity Survey. Both data sets yielded excellent GOF, but the similarity checks often failed to support close or approximate empirical replication, especially for covariance patterns and indicator thresholds. We conclude with recommendations for applied research, including registered reports of model-based research, and provide extensive annotated R code to facilitate future applications of prior predictive similarity checking.
