Abstract

Non-inferiority (NI) clinical trials are widely used to evaluate whether a new experimental treatment is not unacceptably worse than the current active-control treatment by more than a pre-specified non-inferiority margin (NI margin). However, the choice between an absolute difference [risk difference (RD)] and a relative difference [relative risk (RR) or odds ratio (OR)] for evaluating efficacy in NI clinical trials remains controversial. In this study, we aim to evaluate the performance of these three metrics for testing NI in clinical trials with a binary (risk-rate) endpoint. Extensive Monte Carlo simulations across a range of parameter settings (the NI margin and the risk rates in the experimental and active-control groups) are conducted to compare the Type I error rate, the statistical power, and the sample size required to achieve a desired power when testing NI using RD, RR, and OR. We show that testing NI using RD not only controls the Type I error rate well and achieves the highest statistical power, but also requires the smallest sample size, compared to RR and OR. In practice, however, the choice among the three metrics still needs to be based on clinical interpretation and regulatory perspectives.
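To make the testing procedure concrete, the following is a minimal sketch of an NI test on the risk difference together with a Monte Carlo check of its Type I error rate at the NI boundary. The function names, the Wald-type test statistic, and all parameter values are illustrative assumptions, not the exact procedure or settings used in the study.

```python
import math
import random
from statistics import NormalDist

def ni_test_rd(x_e, n_e, x_c, n_c, margin, alpha=0.025):
    """Wald-type one-sided non-inferiority test on the risk difference (RD).

    H0: p_e - p_c <= -margin (experimental unacceptably worse)
    H1: p_e - p_c >  -margin (non-inferior)
    Returns True if non-inferiority is declared at one-sided level alpha.
    (Illustrative sketch; the study's actual test statistic may differ.)
    """
    p_e, p_c = x_e / n_e, x_c / n_c
    se = math.sqrt(p_e * (1 - p_e) / n_e + p_c * (1 - p_c) / n_c)
    if se == 0:
        return p_e - p_c + margin > 0
    z = (p_e - p_c + margin) / se
    return z > NormalDist().inv_cdf(1 - alpha)

def type_one_error_rd(p_c, margin, n, sims=5000, alpha=0.025, seed=1):
    """Monte Carlo Type I error at the NI boundary, where p_e = p_c - margin.

    Draws binomial counts for both arms and records how often NI is
    (wrongly) declared; the rejection rate should be close to alpha.
    """
    rng = random.Random(seed)
    p_e = p_c - margin
    rejections = 0
    for _ in range(sims):
        x_e = sum(rng.random() < p_e for _ in range(n))
        x_c = sum(rng.random() < p_c for _ in range(n))
        if ni_test_rd(x_e, n, x_c, n, margin, alpha):
            rejections += 1
    return rejections / sims
```

For example, with a hypothetical control rate of 0.80, an NI margin of 0.10, and 200 patients per arm, `type_one_error_rd(0.80, 0.10, 200)` estimates the rejection rate under the boundary null, which should sit near the nominal one-sided level of 0.025. Analogous tests for RR and OR work on the log scale with their own standard errors, which is one reason the three metrics can differ in power and required sample size.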
