Abstract

This article proposes a method to overcome limitations in current approaches to multiple comparisons of adaptive interventions embedded in sequential multiple assignment randomized trial (SMART) designs. Because a SMART typically consists of numerous adaptive interventions, inferential procedures based on pairwise comparisons of all interventions may suffer a substantial loss in power after accounting for multiplicity. Meanwhile, traditional methods for multiplicity adjustments in comparing non-adaptive interventions require prior knowledge of correlation structures, which can be difficult to postulate when analyzing SMART data on adaptive interventions. To address the multiplicity issue, we propose a likelihood-based omnibus test that compares all adaptive interventions simultaneously and apply it as a gate-keeping test for further decision making. Specifically, we consider a selection procedure that selects the adaptive intervention with the best observed outcome only when the proposed omnibus test reaches a pre-specified significance level, so as to control false positive selection. We derive the asymptotic distribution of the test statistic, on which a sample size formula is based. Our simulation study confirms that the asymptotic approximation is accurate with a moderate sample size and shows that the proposed test outperforms existing multiple comparison procedures in terms of statistical power. The simulation results also suggest that our selection procedure achieves a high probability of selecting a superior adaptive intervention. The application of the proposed method is illustrated with a real dataset from a depression management study.
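The gate-keeping logic of the selection procedure can be sketched as follows. This is a minimal illustration only: the paper's actual omnibus statistic is likelihood-based, so the Wald-type chi-square test used here is an assumed stand-in, and the function name, inputs, and significance level are hypothetical.

```python
# Hypothetical sketch of a gate-keeping selection procedure: select the
# adaptive intervention with the best observed mean outcome only if an
# omnibus test of equality rejects at level alpha. The Wald-type chi-square
# statistic is an illustrative substitute for the article's likelihood-based
# omnibus test; all names and inputs here are assumptions.
import numpy as np
from scipy import stats

def gatekeeping_select(theta_hat, cov_hat, alpha=0.05):
    """Return the index of the best adaptive intervention, or None if the
    omnibus test of H0: theta_1 = ... = theta_K fails to reject at alpha.

    theta_hat : (K,) estimated mean outcomes of the K adaptive interventions
    cov_hat   : (K, K) estimated covariance matrix of theta_hat
    """
    theta_hat = np.asarray(theta_hat, dtype=float)
    K = theta_hat.size
    # Contrast matrix encoding equality of all K means via K-1 contrasts.
    C = np.hstack([np.ones((K - 1, 1)), -np.eye(K - 1)])
    d = C @ theta_hat
    # Wald statistic: d' (C Sigma C')^{-1} d, asymptotically chi^2_{K-1} under H0.
    W = float(d @ np.linalg.solve(C @ cov_hat @ C.T, d))
    p_value = stats.chi2.sf(W, df=K - 1)
    if p_value > alpha:
        return None  # gate closed: no selection, guarding against false positives
    return int(np.argmax(theta_hat))  # gate open: pick the best observed outcome
```

Under the null configuration (all estimated outcomes equal) the gate stays closed and no intervention is selected, which is how the procedure controls false positive selection.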
