ABSTRACT Careless responding is a pervasive concern in research using affective surveys. Although researchers have considered various methods for identifying careless responses, few studies have examined the utility of these methods in the context of computer adaptive testing (CAT) for affective scales. Using a simulation study informed by recent research, we explored the impact of careless response patterns on three types of response quality indicators: model-based person fit statistics, CAT-specific statistical process control indicators, and standalone indices of carelessness. We focused on over-consistency and random responding as forms of carelessness. All three types of indicators were sensitive to carelessness, with some differences related to CAT termination rules and item bank size. Our findings suggest that person fit analysis techniques may also be useful in affective CAT contexts for evaluating response quality from a measurement model perspective, including identifying carelessness. We discuss the implications of our work for research and practice.