Abstract

Stepped-wedge cluster randomized trials (SW-CRTs) are typically analyzed assuming a constant intervention effect. In practice, the intervention effect may vary as a function of exposure time, and analyses that assume a constant effect can then yield biased results. The time-on-intervention (TOI) approach specifies a separate discrete intervention effect for each elapsed period of exposure since the intervention was first introduced, and has been demonstrated to produce estimates with minimal bias and nominal coverage probabilities in the analysis of SW-CRTs. Due to the design's staggered crossover, TOI effect variances are heteroskedastic in an SW-CRT. Accordingly, we hypothesize that alternative CRT designs may be more efficient for estimating certain TOI effects. We derive and compare the variance estimators of TOI effects across an SW-CRT, a parallel CRT (P-CRT), a parallel CRT with baseline (PB-CRT), and a novel parallel CRT with baseline and an all-exposed period (PBAE-CRT). We also prove that the time-averaged TOI effect variance and point estimators are identical to those of the constant intervention effect in both P-CRTs and PB-CRTs. We then use data collected from a hospital disinvestment study to simulate and compare the differences in TOI effect estimates between the CRT designs. Our results show that the SW-CRT has the most efficient estimator for the early TOI effect, whereas the PB-CRT typically has the most efficient estimators for the long-term and time-averaged TOI effects. Overall, a PB-CRT with TOI effects can be a more appropriate choice of CRT design for modeling intervention effects that vary with exposure time.
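As an illustrative sketch (not drawn from the abstract itself), the TOI specification described above can be written for a commonly used linear mixed model with fixed calendar-period effects and a random cluster intercept; all symbols here are assumptions for illustration:

```latex
\[
  Y_{ijk} \;=\; \beta_j \;+\; \sum_{s=1}^{S} \delta_s \,\mathbb{1}\{E_{ij} = s\}
  \;+\; c_i \;+\; e_{ijk},
\]
% Y_{ijk}: outcome for individual k in cluster i during calendar period j
% \beta_j: fixed effect of calendar period j (secular trend)
% E_{ij}: exposure time of cluster i at period j (0 before crossover)
% \delta_s: discrete TOI effect after s periods of exposure
% c_i ~ N(0, \tau^2): random cluster intercept
% e_{ijk} ~ N(0, \sigma^2): residual error
% Time-averaged TOI effect: \bar{\delta} = \tfrac{1}{S}\sum_{s=1}^{S} \delta_s
```

Replacing the sum of exposure-time indicators with a single term $\theta\, \mathbb{1}\{E_{ij} > 0\}$ recovers the constant intervention effect model that the abstract contrasts against.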
