Abstract

Pickering and Blaszczynski (2021) claim that problem gambling rates are inflated in paid online convenience and crowdsourced samples. However, their findings rest on a methodological flaw: they combined problem gambling rates from samples that are specific by design (e.g., at-least-monthly sports bettors) and compared them with a problem gambling prevalence estimate from the general population. Pickering and Blaszczynski also conflate three distinct constructs: representativeness, bias, and data quality. Data quality can be optimized through protections and checks, but these do not necessarily make samples more representative or less biased. Many of the biases present in paid online convenience samples (e.g., self-selection bias) also apply to the gold standard of random-digit-dial telephone surveys, as is evident in their very low response rates. These biases are likewise present in industry-recruited and venue-recruited samples, as well as in samples of university students and treatment-seeking clients. Paid online convenience samples also have clear benefits: for example, they make it possible to obtain large samples of very specific subgroups, and online surveys may reduce the bias associated with self-reporting potentially stigmatizing conditions such as problem gambling. Research should not be discounted simply because it uses a paid online convenience or crowdsourced sample.
