Abstract

In an observational study matched for observed covariates, an association between treatment received and outcome exhibited may indicate not an effect caused by the treatment, but merely some bias in the allocation of treatments to individuals within matched pairs. The evidence that distinguishes moderate biases from causal effects is unevenly dispersed among possible comparisons in an observational study: some comparisons are insensitive to larger biases than others. Intuitively, larger treatment effects tend to be insensitive to larger unmeasured biases, and perhaps matched pairs can be grouped using covariates, doses or response patterns so that groups of pairs with larger treatment effects may be identified. Even if an investigator has a reasoned conjecture about where to look for insensitive comparisons, that conjecture might prove mistaken, or, when not mistaken, it might be received sceptically by other scientists who doubt the conjecture or judge it to be too convenient in light of its success with the data at hand. In this article a test is proposed that searches for insensitive findings over many comparisons, but controls the probability of falsely rejecting a true null hypothesis of no treatment effect in the presence of a bias of specified magnitude. An example is studied in which the test considers many comparisons and locates an interpretable comparison that is insensitive to larger biases than a conventional comparison based on Wilcoxon’s signed rank statistic applied to all pairs. A simulation examines the power of the proposed test. The method is implemented in the R package dstat, which contains the example and reproduces the analysis.
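The baseline against which the proposed test is compared — Wilcoxon's signed rank statistic applied to all pairs, assessed for sensitivity to an unmeasured bias of specified magnitude Γ — can be sketched as follows. This is not the adaptive test of the article or the dstat package; it is a hypothetical Python illustration of the standard normal-approximation sensitivity bound for one-sided inference in matched pairs, with the function name and interface assumed for illustration.

```python
import numpy as np
from scipy.stats import norm, rankdata

def wilcoxon_sensitivity(d, gamma):
    """Upper bound on the one-sided p-value for Wilcoxon's signed rank
    statistic in matched pairs, allowing an unmeasured bias of magnitude
    gamma >= 1 (gamma = 1 recovers the randomization-based analysis).

    d: array of treated-minus-control differences, one per matched pair.
    """
    d = np.asarray(d, dtype=float)
    d = d[d != 0]                        # discard zero differences
    q = rankdata(np.abs(d))              # ranks of absolute differences
    T = q[d > 0].sum()                   # signed rank statistic
    p = gamma / (1.0 + gamma)            # worst-case probability of a positive sign
    mu = p * q.sum()                     # largest null expectation of T under bias gamma
    sigma2 = p * (1.0 - p) * (q ** 2).sum()  # corresponding null variance
    deviate = (T - mu) / np.sqrt(sigma2)     # standardized deviate
    return norm.sf(deviate)              # normal-approximation p-value bound
```

As gamma grows, the bound rises; the largest gamma at which it stays below the chosen level measures how insensitive the comparison is, which is the quantity the proposed test seeks to improve by searching over many comparisons.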
