Abstract
A variety of delay-discounting tasks are widely used in human studies designed to quantify the degree to which individuals discount the value of delayed rewards. It is currently unknown which task(s) yields the largest proportion of valid and systematic data using standard criteria (Johnson & Bickel, 2008). The goal of this study was to compare three delay-discounting tasks on task duration and the amount of valid and systematic data produced. In Experiment 1, 180 college students completed one of three tasks online (fixed alternatives, titrating, or visual analogue scale [VAS]). Invalid and nonsystematic data, identified using standard criteria, were most prevalent with the VAS (47.3% of participants). The other tasks produced more (and similar amounts of) valid and systematic data, but required more time to complete than the VAS. Because systematic data were viewed as more important than completion times, Experiment 2 (n = 153 college students) sought to reduce the number of invalid datasets produced by the fixed-alternatives task and to compare its rate of nonsystematic data with that of the titrating task. Completion times were superior in the titrating task, which produced modestly more systematic data than the fixed-alternatives task. Causes of invalid and nonsystematic data are discussed, as are methods for reducing data exclusion.
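To make the "standard criteria" concrete, the sketch below illustrates the two Johnson & Bickel (2008) checks for nonsystematic discounting data as they are commonly applied: (1) any indifference point rises above the one at the preceding (shorter) delay by more than 20% of the delayed amount, and (2) the last indifference point is not at least 10% of the delayed amount lower than the first. This is a minimal illustration, not the authors' analysis code; the function name, the Python implementation, and the representation of indifference points as proportions of the delayed reward are assumptions for the example.

```python
import numpy as np

def flag_nonsystematic(indiff_points):
    """Apply the two Johnson & Bickel (2008) criteria to one participant's
    indifference points, ordered from shortest to longest delay and expressed
    as proportions of the delayed amount. Returns True if flagged."""
    pts = np.asarray(indiff_points, dtype=float)
    # Criterion 1: any indifference point exceeds the preceding one
    # by more than 20% of the delayed amount.
    criterion1 = bool((np.diff(pts) > 0.20).any())
    # Criterion 2: the last indifference point is not at least 10% of the
    # delayed amount below the first (i.e., little or no discounting).
    criterion2 = (pts[0] - pts[-1]) < 0.10
    return criterion1 or criterion2

# Hypothetical data: orderly discounting is retained; a large reversal is flagged.
print(flag_nonsystematic([0.95, 0.80, 0.60, 0.40, 0.20]))  # False (systematic)
print(flag_nonsystematic([0.95, 0.80, 0.60, 0.90, 0.20]))  # True (criterion 1)
```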