Abstract

Sample size adjustment at an interim analysis can mitigate the risk of failing to meet the study objective due to a lower-than-expected treatment effect. Without modification of the conventional statistical methods, the type I error rate can be inflated, primarily because the sample size is increased when the interim observed treatment effect is close to the null, i.e., little or no treatment effect. Modifications to the conventional statistical methods, such as changing critical values or using weighted test statistics, have been proposed primarily to address this scenario, at the cost of flexibility or interpretability. In reality, increasing the sample size when interim results indicate no or a very small treatment effect could unnecessarily waste limited resources on an ineffective drug candidate. Such considerations have led to increased interest in sample size adjustment based on promising interim results. The 50% conditional power principle allows a sample size increase only when the unblinded interim results are promising, that is, when the conditional power is greater than 50%. The conventional unweighted test statistics and critical values can then be used without inflation of the type I error rate. In this paper, statistical inference following such a design is assessed. As shown in the numerical study, the bias of the conventional maximum likelihood estimate (MLE) and the coverage error of its conventional confidence interval are generally small following sample size adjustment. We recommend using conventional, MLE-based statistical inference when applying the 50% conditional power principle for sample size adjustment. In this way, consistent statistics are used for both the hypothesis test and statistical inference.
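
As a concrete illustration of the 50% conditional power principle, the sketch below computes conditional power under the interim-observed ("current trend") effect for a standard normal (z) test statistic and allows a sample size increase only when it exceeds 50%. The information fraction, one-sided alpha level, final critical value, and interim z-statistic are illustrative assumptions; the paper's exact design details are not given in the abstract.

```python
import numpy as np
from scipy.stats import norm

def conditional_power(z1, t, alpha=0.025):
    """Conditional power at information fraction t, given interim z-statistic z1,
    assuming the remaining data follow the interim-observed ('current trend') effect."""
    c = norm.ppf(1 - alpha)                 # final critical value (illustrative: no alpha spending)
    drift = z1 / np.sqrt(t)                 # drift parameter estimated from the interim result
    arg = (c - np.sqrt(t) * z1) / np.sqrt(1 - t) - drift * np.sqrt(1 - t)
    return 1 - norm.cdf(arg)

# 50% conditional power principle: increase the sample size only when the
# unblinded interim result is promising, i.e., conditional power >= 50%.
z1, t = 1.4, 0.5                            # hypothetical interim z-statistic and information fraction
cp = conditional_power(z1, t)
allow_increase = cp >= 0.5
print(f"conditional power = {cp:.3f}, sample size increase allowed: {allow_increase}")
```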
