There is growing interest in Bayesian clinical trial designs with informative prior distributions, for example for extrapolation of adult data to pediatrics or the use of external controls. While the classical Type I error is commonly used to evaluate such designs, it cannot be strictly controlled, and it is acknowledged that other metrics may be more appropriate. We focus on two common situations, borrowing control data or borrowing information on the treatment contrast, and discuss several fully probabilistic metrics to evaluate the risk of false positive conclusions. Each metric requires specification of a design prior, which can differ from the analysis prior and permits assessment of how a Bayesian design behaves in scenarios where the analysis prior differs from the true data-generating process. The metrics include the average Type I error and the pre-posterior probability of a false positive result. For borrowing of control data, our empirical cases demonstrate that the average Type I error is asymptotically controlled (and in certain cases strictly controlled) when the analysis and design priors coincide. We illustrate the use of these Bayesian metrics with real applications and consider how they could facilitate discussions between sponsors, regulators, and other stakeholders about the appropriateness of Bayesian borrowing designs for pivotal studies.
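As a rough sketch of the two metrics named above (notation and formulation ours, not taken verbatim from the paper): let $\pi_D$ denote the design prior for the parameter $\theta$, $\Theta_0$ the null region of no treatment benefit, and $R(Y)$ the event that the Bayesian analysis, carried out with its analysis prior, declares success on data $Y$. The metrics can then be written as

\begin{align*}
\text{average Type I error:}\quad
  \bar{\alpha} &= \int_{\Theta_0} \Pr\bigl(R(Y)\mid\theta\bigr)\,\pi_D(\theta\mid\theta\in\Theta_0)\,\mathrm{d}\theta,\\
\text{pre-posterior false positive probability:}\quad
  \mathrm{FP} &= \Pr\bigl(R(Y),\,\theta\in\Theta_0\bigr)
              = \int_{\Theta_0} \Pr\bigl(R(Y)\mid\theta\bigr)\,\pi_D(\theta)\,\mathrm{d}\theta.
\end{align*}

Under these assumptions, the first quantity averages the frequentist Type I error over the design prior restricted to the null, while the second is the joint (unconditional) probability of study success and a true null under the design prior; the two differ by the factor $\Pr_{\pi_D}(\theta\in\Theta_0)$.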