Abstract

Prior work by Michael R. Dougherty and colleagues (Yu et al., 2014) shows that when a scientist monitors the p value during data collection and uses a critical p as the signal to stop collecting data, the resulting p is distorted due to Type I error-rate inflation. They argued, similarly, that the use of a critical Bayes factor (BF(crit)) for stopping distorts the obtained Bayes factor (BF), a position that has met with controversy. The present paper clarifies that when BF(crit) is used as a stopping criterion, the sample becomes biased in that data consistent with large effects have a greater chance of being included than other data do, thus biasing the input to Bayesian inference. We report simulations of yoked pairs of scientists in which Scientist A uses BF(crit) to optionally stop, while Scientist B, sampling from the same population, stops when A stops. Thus, optional stopping is compared not to a hypothetical in which no stopping occurs, but to a situation in which B stops for reasons unrelated to the characteristics of B's sample. The results indicated that optional stopping biased the input to Bayesian inference. We also simulated the use of effect-size stabilization as a stopping criterion and found no bias in that case.
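
To make the yoked-pair design concrete, the following is a minimal simulation sketch, not the authors' code: it assumes a one-sample normal design, substitutes the BIC approximation to a default Bayes factor for whatever BF the paper computed, and uses illustrative values for the batch size, BF(crit), and true effect size. The names (`bf10_bic`, `yoked_pair`) and all parameter choices are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def bf10_bic(x):
    """Approximate BF10 (H1: mu != 0 vs. H0: mu = 0) via the BIC approximation."""
    n = len(x)
    sigma2_0 = np.mean(x ** 2)                # ML error variance with mean fixed at 0
    sigma2_1 = np.mean((x - x.mean()) ** 2)   # ML error variance with mean estimated
    bic0 = n * np.log(sigma2_0) + 1 * np.log(n)   # shared constants cancel in the difference
    bic1 = n * np.log(sigma2_1) + 2 * np.log(n)
    return np.exp((bic0 - bic1) / 2.0)

def yoked_pair(true_delta=0.2, bf_crit=10.0, batch=10, n_max=500):
    """Scientist A adds batches until BF10 crosses bf_crit or 1/bf_crit (or n_max);
    Scientist B samples from the same population but stops only because A did."""
    a = rng.normal(true_delta, 1.0, batch)
    while len(a) < n_max:
        bf = bf10_bic(a)
        if bf >= bf_crit or bf <= 1.0 / bf_crit:
            break
        a = np.concatenate([a, rng.normal(true_delta, 1.0, batch)])
    b = rng.normal(true_delta, 1.0, len(a))   # yoked: same n, stopping unrelated to b itself
    return a, b

# Compare the observed effect sizes fed into Bayesian inference, restricted to
# pairs in which A stopped because BF10 reached bf_crit (evidence "for" H1).
d_a, d_b = [], []
for _ in range(2000):
    a, b = yoked_pair(true_delta=0.2, bf_crit=10.0)
    if bf10_bic(a) >= 10.0:
        d_a.append(a.mean() / a.std(ddof=1))
        d_b.append(b.mean() / b.std(ddof=1))

print(f"pairs where A hit BF_crit: {len(d_a)}")
print(f"mean d for A (optional stopping): {np.mean(d_a):.3f}")
print(f"mean d for B (yoked stopping):    {np.mean(d_b):.3f}")
```

Under these assumptions, Scientist A's samples show an inflated mean effect size relative to the true value of 0.2, while the yoked Scientist B's samples do not, illustrating the sense in which the stopping rule biases the data that enter the Bayesian analysis rather than the inference machinery itself.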
