Abstract

Background: Statistical inference based on small datasets, commonly found in precision oncology, is subject to low power and high uncertainty. In these settings, drawing strong conclusions about future research utility is difficult when using standard inferential measures. It is therefore important to better quantify the uncertainty associated with both significant and non-significant results based on small sample sizes.

Methods: We developed a new method, Bayesian Additional Evidence (BAE), that determines (1) how much additional supportive evidence is needed for a non-significant result to reach Bayesian posterior credibility, or (2) how much additional opposing evidence is needed to render a significant result non-credible. Although based in Bayesian analysis, a prior distribution is not needed; instead, the tipping point output is compared to reasonable effect ranges to draw conclusions. We demonstrate our approach in a comparative effectiveness analysis comparing two treatments in a real-world biomarker-defined cohort, and provide guidelines for how to apply BAE in practice.

Results: Our initial comparative effectiveness analysis results in a hazard ratio of 0.31 with 95% confidence interval (0.09, 1.1). Applying BAE to this result yields a tipping point of 0.54; thus, an observed hazard ratio of 0.54 or smaller in a replication study would result in posterior credibility for the treatment association. Given that effect sizes in this range are not extreme, and that supportive evidence exists from a similar published study, we conclude that this problem is worthy of further research.

Conclusions: Our proposed method provides a useful framework for interpreting analytic results from small datasets. This can assist researchers in deciding how to interpret and continue their investigations based on an initial analysis that has high uncertainty. Although we illustrated its use in estimating parameters based on time-to-event outcomes, BAE easily applies to any normally distributed estimator, such as those used for analyzing binary or continuous outcomes.
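The tipping-point idea described in the Methods and Results can be illustrated with a short calculation. The sketch below is an assumption-laden illustration, not the authors' implementation: it assumes the log hazard ratio estimate is normally distributed, that a hypothetical replication study would have the same standard error as the original analysis, and that the two estimates are combined by precision weighting (a conjugate normal update). Under those assumptions, the tipping point is the largest replication hazard ratio for which the combined 95% interval still excludes 1.

import math

# Illustrative BAE-style tipping point (under the assumptions stated above,
# not necessarily the paper's exact procedure).
def tipping_point_hr(hr_hat, ci_low, ci_high):
    z = 1.959964                                             # two-sided 95% normal quantile
    b1 = math.log(hr_hat)                                    # observed log hazard ratio
    se = (math.log(ci_high) - math.log(ci_low)) / (2 * z)    # SE recovered from the CI width
    se_comb = se / math.sqrt(2)                              # SE of the equal-weight combined estimate
    # Solve (b1 + b2) / 2 + z * se_comb <= 0 for the replication log hazard ratio b2.
    b2_max = -b1 - 2 * z * se_comb
    return math.exp(b2_max)

print(round(tipping_point_hr(0.31, 0.09, 1.1), 2))           # about 0.55 under these assumptions

Applied to the hazard ratio of 0.31 with 95% confidence interval (0.09, 1.1), this gives a value close to the reported tipping point of 0.54; the small difference reflects the simplifying assumptions made here about the replication study's precision.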

Highlights

  • Statistical inference based on small datasets, commonly found in precision oncology, is subject to low power and high uncertainty

  • Fitting a Cox model, we estimate that the adjusted hazard ratio of death for patients treated with 1L chemotherapy plus bevacizumab, compared to patients treated with 1L chemotherapy plus cetuximab, is 0.42, with a p-value of 0.11 (a sketch of this type of model fit follows this list)

  • Analysis of Credibility (AnCred) provides intervals of prior effect sizes that are consistent with credible evidence of effects; a statistic based on this prior is compared to plausible effect sizes to arrive at a decision. We find this inverse Bayesian approach appealing and a useful way to contextualize inferential results, although AnCred is more difficult to interpret than Bayesian Additional Evidence (BAE)
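For readers who want to see what the adjusted Cox model fit in the second highlight looks like in code, here is a minimal sketch using simulated data and the lifelines package. The column names, covariates, sample size, and effect sizes are hypothetical; the actual cohort, adjustment variables, and software used by the authors are not shown here.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Simulated stand-in for a biomarker-defined cohort (all values hypothetical).
rng = np.random.default_rng(0)
n = 200
treatment = rng.integers(0, 2, n)                  # 1 = 1L chemo + bevacizumab, 0 = 1L chemo + cetuximab
age = rng.normal(65, 8, n)                         # example adjustment covariate
# Exponential survival times with a protective treatment effect built in
rate = 0.05 * np.exp(np.log(0.5) * treatment + 0.02 * (age - 65))
death_time = rng.exponential(1 / rate)
censor_time = rng.exponential(30, n)               # censoring (end of follow-up, loss to follow-up)

df = pd.DataFrame({
    "time": np.minimum(death_time, censor_time),       # observed follow-up time
    "event": (death_time <= censor_time).astype(int),  # 1 = death observed
    "bevacizumab": treatment,
    "age": age,
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")    # adjusts for all remaining columns
cph.print_summary()   # exp(coef) for "bevacizumab" is the adjusted hazard ratio of death

With the study's actual data, the adjusted hazard ratio of 0.42 and p-value of 0.11 quoted above would be read from the corresponding row of such a summary.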



Introduction

Statistical inference based on small datasets, commonly found in precision oncology, is subject to low power and high uncertainty. In these settings, drawing strong conclusions about future research utility is difficult when using standard inferential measures. Statistical inference is crucial to drawing robust conclusions from data. This is often done by testing a parameter estimate for “statistical significance”, using p-values and confidence intervals. Non-significant findings are frequently left unreported, so there is no opportunity to learn from the analysis conducted. Even when reported, such findings are usually qualified as “trending towards” or “approaching” significance, which is an arbitrary designation; it does not indicate how likely the hypothesis of interest is, or whether future research is worthwhile.

