Abstract

We extended current knowledge by examining the performance of several Bayesian model fit and comparison indices through a simulation study using confirmatory factor analysis. Our goal was to determine whether commonly implemented Bayesian indices can detect specification errors; specifically, we sought to uncover differences in their ability to detect model under-fitting versus over-fitting. We examined a conventional Bayesian fit index (the posterior predictive p-value), approximate Bayesian fit indices (Bayesian RMSEA, CFI, and TLI), and model comparison indices (BIC and DIC), varying the type and severity of model misspecification, sample size, and priors. We provide practical advice for applied researchers on how to assess and compare models using these common indices as implemented in the Bayesian framework.

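To illustrate the conventional Bayesian fit index referenced above, the sketch below shows how a posterior predictive p-value (PPP) can be computed once a discrepancy measure (e.g., a chi-square statistic) has been evaluated for the observed data and for replicated data at each posterior draw. This is a minimal illustration under assumed inputs, not the procedure used in the study; the discrepancy arrays here are simulated placeholders purely for demonstration.

```python
import numpy as np

def posterior_predictive_p(d_obs, d_rep):
    """Posterior predictive p-value: the proportion of posterior draws in which
    the discrepancy of replicated data meets or exceeds the discrepancy of the
    observed data, given the same parameter draw."""
    d_obs = np.asarray(d_obs)
    d_rep = np.asarray(d_rep)
    return float(np.mean(d_rep >= d_obs))

# Hypothetical example: discrepancies (e.g., chi-square values) evaluated at
# each of 2,000 posterior draws; these are placeholder values, not real output.
rng = np.random.default_rng(1)
d_obs = rng.chisquare(df=24, size=2000)
d_rep = rng.chisquare(df=24, size=2000)
print(f"PPP = {posterior_predictive_p(d_obs, d_rep):.3f}")
```

Values near .5 indicate that the model reproduces the observed data adequately, whereas values close to 0 signal under-fitting.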