Abstract
The ability to accurately estimate the sample size required by a stepped‐wedge (SW) cluster randomized trial (CRT) routinely depends upon the specification of several nuisance parameters. If these parameters are misspecified, the trial could be overpowered, leading to increased cost, or underpowered, increasing the likelihood of a false negative. We address this issue here for cross‐sectional SW‐CRTs, analyzed with a particular linear mixed model, by proposing methods for blinded and unblinded sample size reestimation (SSRE). First, blinded estimators for the variance parameters of a SW‐CRT analyzed using the Hussey and Hughes model are derived. Following this, procedures for blinded and unblinded SSRE after any time period in a SW‐CRT are detailed. The performance of these procedures is then examined and contrasted using two example trial design scenarios. We find that if the two key variance parameters were underspecified by 50%, the SSRE procedures were able to increase power over the conventional SW‐CRT design by up to 41%, resulting in an empirical power above the desired level. Thus, though there are practical issues to consider, the performance of the procedures means researchers should consider incorporating SSRE into future SW‐CRTs.
Highlights
A stepped-wedge (SW) cluster randomised trial (CRT) involves the sequential roll-out of an intervention across several clusters over multiple time periods, with the time period in which a cluster begins receiving the intervention determined at random
Given this commonly held belief, it may come as a surprise that a recent literature review determined that in 31% of the SW-CRTs completed by February 2015, there was no significant effect of the experimental intervention on any of the trial's primary outcome measures (Grayling, Wason, & Mander, 2017a)
We consider how the sample size reestimation (SSRE) procedures perform as σ_c² and σ_e² are misspecified to varying degrees
Summary
A stepped-wedge (SW) cluster randomised trial (CRT) involves the sequential roll-out of an intervention across several clusters over multiple time periods, with the time period in which a cluster begins receiving the intervention determined at random. There has been a growing interest in the design, and in particular, it has become associated with scenarios in which there is a belief that the trial's experimental intervention will be effective (Brown & Lilford, 2006; Mdege, Man, Taylor (nee Brown), & Torgerson, 2011). Given this commonly held belief, it may come as a surprise that a recent literature review determined that in 31% of the SW-CRTs completed by February 2015, there was no significant effect of the experimental intervention on any of the trial's primary outcome measures (Grayling, Wason, & Mander, 2017a).
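To make the design concrete, the sketch below computes the approximate power of a cross-sectional SW-CRT under the Hussey and Hughes (2007) linear mixed model, using their standard closed-form expression for the variance of the treatment effect estimator. This is the quantity that a sample size (re)estimation procedure would evaluate under candidate values of the between-cluster variance σ_c² and residual variance σ_e²; the function name, argument names, and example design are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.stats import norm

def hh_power(X, n, sigma_c2, sigma_e2, theta, alpha=0.05):
    """Approximate power of a two-sided Wald test in a cross-sectional
    SW-CRT, via the Hussey and Hughes (2007) variance formula.

    X        : (I, T) binary array; X[i, t] = 1 if cluster i receives the
               intervention in period t
    n        : individuals sampled per cluster per period
    sigma_c2 : between-cluster variance (sigma_c^2)
    sigma_e2 : residual variance (sigma_e^2)
    theta    : assumed treatment effect
    """
    I, T = X.shape
    sigma2 = sigma_e2 / n                      # variance of a cluster-period mean
    U = X.sum()                                # total cluster-periods on intervention
    W = (X.sum(axis=0) ** 2).sum()             # sum of squared column (period) totals
    V = (X.sum(axis=1) ** 2).sum()             # sum of squared row (cluster) totals
    var_theta = (I * sigma2 * (sigma2 + T * sigma_c2)) / (
        (I * U - W) * sigma2
        + (U ** 2 + I * T * U - T * W - I * V) * sigma_c2
    )
    return norm.cdf(abs(theta) / np.sqrt(var_theta) - norm.ppf(1 - alpha / 2))

# Illustrative design: 6 clusters, 7 periods, one cluster switching per period
I, T = 6, 7
X = np.array([[1 if t > i else 0 for t in range(T)] for i in range(I)])
power = hh_power(X, n=10, sigma_c2=0.02, sigma_e2=1.0, theta=0.2)
```

An SSRE procedure of the kind the paper describes would re-evaluate such a power function part-way through the trial, replacing the design-stage values of σ_c² and σ_e² with blinded or unblinded interim estimates, and increase the per-period sample size n if the recomputed power falls short of the target.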