Abstract

This paper examines the application of super-superiority margins in study power calculations. Unlike traditional power calculations, which aim primarily to reject the null hypothesis by any margin, a super-superiority margin establishes a clinically significant threshold. Despite its potential benefits, this approach, akin to a non-inferiority calculation but in the opposite direction, is rarely used. Implementing a super-superiority margin separates the notion of the likely difference between two groups (the effect size) from the minimum clinically significant difference; without this separation, inconsistent positions can be held, yet the two quantities are often used interchangeably. In an audit of 30 recent randomized controlled trial power calculations, four studies used the minimal acceptable difference and nine used the expected difference; in the remaining studies, the basis was not specified. Post hoc, this approach can shed light on the value of undertaking further studies, which is not apparent from a standard power calculation. The acceptance and rejection of the alternative hypothesis for super-superiority, non-inferiority, equivalence, and standard superiority studies are compared. When a fixed minimal acceptable difference is applied, a study result will fall into one of seven logical positions with regard to the simultaneous application of these hypotheses. The trend towards larger trials, and the mirror-image approach of non-inferiority studies, implies that newer interventions may be becoming less effective. Powering for super-superiority could counter this and would ensure that a pre-trial evaluation of clinical significance has taken place, which is necessary to confirm that interventions are beneficial.
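
As a minimal illustrative sketch (not part of the original abstract), the distinction between the expected difference and the minimum clinically significant difference can be shown with a standard two-sample sample-size formula under a normal approximation, where the super-superiority margin enters the denominator in the same way a non-inferiority margin would, but with opposite sign. All parameter names and numerical values below are hypothetical assumptions chosen only to demonstrate the effect on sample size.

```python
# Sketch: per-group sample size for a two-sample comparison of means
# (normal approximation), powered against a super-superiority margin
# rather than against zero. All parameter values are hypothetical.
from scipy.stats import norm

def n_per_group(delta_expected, delta_margin, sd, alpha=0.05, power=0.9):
    """Per-group n to detect an expected difference `delta_expected` when
    the one-sided null hypothesis is H0: true difference <= delta_margin.
    Setting delta_margin = 0 recovers the standard superiority calculation."""
    z_alpha = norm.ppf(1 - alpha)   # one-sided test against the margin
    z_beta = norm.ppf(power)
    return 2 * ((z_alpha + z_beta) * sd / (delta_expected - delta_margin)) ** 2

# Standard superiority: powered against any difference greater than zero.
print(round(n_per_group(delta_expected=5.0, delta_margin=0.0, sd=10.0)))  # ~69 per group
# Super-superiority: powered against a minimum clinically significant difference of 2.
print(round(n_per_group(delta_expected=5.0, delta_margin=2.0, sd=10.0)))  # ~190 per group
```

Under these assumed values, separating the margin from the expected effect roughly triples the required sample size, which illustrates why specifying which quantity a power calculation uses is not a merely semantic point.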
