Abstract

In power analysis for multivariable Cox regression models, the variance of the estimated log-hazard ratio for the treatment effect is usually approximated by inverting the expected null information matrix. Because the assumed true values of the hazard ratios in many typical power analysis settings are not necessarily close to unity, the accuracy of this approximation is not theoretically guaranteed. To address this problem, the null variance expression in power calculations can be replaced with one of the alternative expressions derived under the assumed true value of the hazard ratio for the treatment effect. This approach is explored analytically and by simulations in the present paper. We consider several alternative variance expressions and compare their performance to that of the traditional null variance expression. Theoretical analysis and simulations demonstrate that, whereas the null variance expression performs well in many nonnull settings, it can also be very inaccurate, substantially underestimating or overestimating the true variance in a wide range of realistic scenarios, particularly those where the numbers of treated and control subjects are very different and the true hazard ratio is not close to one. The alternative variance expressions have much better theoretical properties, which are confirmed in simulations. The most accurate of these expressions has a relatively simple form: it is the sum of the inverse expected event counts under treatment and under control, scaled up by a variance inflation factor.
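As an illustrative sketch in our own notation (the symbols below are not taken from the abstract and are only an assumed reading of the verbal description), the most accurate expression described above can be written as

\[
  \operatorname{Var}\bigl(\hat{\beta}_{\mathrm{trt}}\bigr)
  \;\approx\;
  \mathrm{VIF} \times \left( \frac{1}{\mathrm{E}[D_1]} + \frac{1}{\mathrm{E}[D_0]} \right),
\]

where $\mathrm{E}[D_1]$ and $\mathrm{E}[D_0]$ denote the expected numbers of events in the treated and control groups, respectively, and $\mathrm{VIF}$ is a variance inflation factor accounting for the adjustment covariates in the multivariable Cox model.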
