Abstract

It is common to model responses to surveys within latent variable frameworks (e.g., item response theory [IRT], confirmatory factor analysis [CFA]) and to use model fit indices to evaluate model-data congruence. Unfortunately, research shows that people occasionally engage in careless responding (CR) when completing online surveys. Although CR has the potential to negatively impact model fit, this issue has not been systematically explored. To better understand the CR-fit linkage, two studies were conducted. In Study 1, participants' response behaviors were experimentally shaped, and the resulting data were used to empirically inform a comprehensive simulation (Study 2). The simulation examined 144 unique conditions (varying sample size, number of items, CR prevalence, CR severity, and CR type), two latent variable models (IRT, CFA), and six model fit indices (χ², RMSEA, and SRMSR for CFA; M2, RMSEA, and SRMSR for IRT). The results indicated that CR deteriorates model fit under most circumstances, though these effects are nuanced, variable, and contingent on many factors. These findings can be leveraged by researchers and practitioners to improve survey methods, obtain more accurate survey results, develop more precise theories, and enable more justifiable data-driven decisions.
