Abstract

Careless and insufficient effort responding (C/IER) can pose a major threat to data quality and, as such, to the validity of inferences drawn from questionnaire data. A rich body of methods aiming at its detection has been developed. Most of these methods can detect only specific types of C/IER patterns. Typically, however, different types of C/IER patterns occur within one data set and need to be accounted for. We present a model-based approach for detecting manifold manifestations of C/IER at once. This is achieved by leveraging response time (RT) information available from computer-administered questionnaires and integrating theoretical considerations on C/IER with recent psychometric modeling approaches. The approach a) takes the specifics of attentive response behavior on questionnaires into account by incorporating the distance–difficulty hypothesis, b) allows for attentiveness to vary on the screen-by-respondent level, c) allows for respondents with different trait and speed levels to differ in their attentiveness, and d) deals at once with various response patterns arising from C/IER. The approach makes use of item-level RTs. An adapted version for aggregated RTs is presented that supports screening for C/IER behavior on the respondent level. Parameter recovery is investigated in a simulation study. The approach is illustrated in an empirical example, comparing different RT measures and contrasting the proposed model-based procedure against indicator-based multiple-hurdle approaches.

Highlights

  • Research in psychology and the educational and social sciences heavily relies on questionnaire data. Careless and insufficient effort responding (C/IER) poses a major threat to the quality of such data.

  • We presented a model-based approach that leverages response time (RT) information for identifying careless and insufficient effort responding (C/IER).

  • This was achieved by integrating theoretical considerations on C/IER in non-cognitive assessments with recent model developments for identifying non-effortful behavior in cognitive assessment data (Ulitzsch et al., 2020).


Summary

Response Time Analyses

Due to the absence of the cognitive processing required for attentively evaluating the item, retrieving relevant information, and selecting a response, short RTs on single items can be indicative of C/IER. Previous research utilizing RTs has focused on time spent on screen and on the whole survey, classifying respondents with screen or completion times below a pre-defined threshold as showing C/IER. The thresholds are commonly defined either based on an educated guess of the minimum amount of time required for an attentive response (Huang et al., 2012; Meade & Craig, 2012) or through visual inspection of the RT distribution (Kroehne et al., 2019; Wise, 2017). Meade and Craig (2012) found that completion times could well predict latent class membership, that is, whether respondents generated facet scores showing high or low association with the construct to be measured. While very short RTs or very little time spent on the survey can well be seen as indicators of C/IER, RTs above a set threshold may or may not stem from C/IER. It has therefore been recommended to apply a sequential approach that classifies C/IER first based on RTs and second, for respondents with longer RTs, based on response-pattern-based indicators (Maniaci & Rogge, 2014; Meade & Craig, 2012).
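The sequential (multiple-hurdle) screen described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `flag_cier`, the 2-second threshold, and the supplied pattern flags are all hypothetical placeholders for whatever threshold and response-pattern indicator (e.g., longstring) a study would actually use.

```python
import numpy as np

def flag_cier(screen_times, threshold=2.0, pattern_flags=None):
    """Two-step C/IER screen (illustrative sketch).

    Hurdle 1: respondents with screen times below `threshold` seconds
    are flagged as C/IER (threshold set by educated guess or visual
    inspection of the RT distribution).
    Hurdle 2: respondents who pass the RT hurdle are screened with a
    response-pattern-based indicator, passed in as boolean flags.
    """
    screen_times = np.asarray(screen_times, dtype=float)
    rt_flag = screen_times < threshold  # hurdle 1: very fast responses
    if pattern_flags is None:
        return rt_flag
    pattern_flags = np.asarray(pattern_flags, dtype=bool)
    # hurdle 2 applies only to respondents not already flagged by RT
    return rt_flag | (~rt_flag & pattern_flags)

# toy data: screen times (seconds) for four respondents
times = [0.8, 3.5, 12.0, 1.4]
print(flag_cier(times, threshold=2.0))
```

The RT hurdle is applied first because it is cheap and unambiguous for very short times; the pattern-based indicator then handles the ambiguous longer-RT cases, mirroring the recommendation of Maniaci and Rogge (2014) and Meade and Craig (2012).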

Dealing with Careless and Insufficient Effort Responding
Proposed Approach
Attentive Behavior
Careless and Insufficient Effort Behavior
Higher-Order Structures
Model Modification for Screen-Level Timing Data
Prior Distributions
Parameter Recovery
Empirical Example
Implementation of Model-Based Approaches
Implementation of Indicator-based Procedures
Results
Discussion
Limitations and Future Research