Abstract

To ensure good-quality delay discounting (DD) data in research recruiting via crowdsourcing platforms, including attention checks within DD tasks has become common. These attention checks are typically identical in format to the task questions but have one sensible answer (e.g., "Would you prefer $0 now or $100 in a month?"). However, the validity of these attention checks as a marker of DD or overall survey data quality has not been directly examined. To address this gap, using data from two studies (total N = 700), we tested the validity of these DD attention checks against performance on other, non-DD attention checks and against data quality measures both specific to DD and for the survey overall (e.g., providing nonsystematic DD data, responding inconsistently in questionnaires). We also tested whether failing the attention checks was associated with degree of discounting or other participant characteristics, to screen for potential bias. Although failing the DD attention checks was associated with a greater likelihood of nonsystematic DD data, their discriminability was inadequate, and failure was sometimes associated with individual differences (suggesting that data exclusion might introduce bias). Failing the DD attention checks was also not associated with failing other attention checks or with other data quality indicators. Overall, the DD attention checks do not appear to be an adequate indicator of data quality on their own, for either the DD task or surveys overall. Strategies to enhance the validity of DD attention checks and data cleaning procedures are suggested, which should be evaluated in future research. (PsycInfo Database Record (c) 2023 APA, all rights reserved).