Background
Adaptive platform trials allow randomized controlled comparisons of multiple treatments using a common infrastructure and the flexibility to adapt key design features during the study. Nonetheless, they have been criticized due to the potential for time trends in the underlying risk level of the population. Such time trends lead to confounding between design features and risk level, which may introduce bias favoring one or more treatments. This is particularly true when experimental treatments are not all randomized during the same time period as the control, creating the potential for bias from non-concurrent controls.

Methods
Two analysis methods address this bias: stratification and adjustment. Stratification uses only comparisons between treatment cohorts randomized during identical time periods and discards non-concurrent randomizations. Adjustment fits a model that includes a time-period term, allowing all data to be used, even from periods without concurrent randomization. We show that these competing approaches may be embedded in a common framework using network meta-analysis principles. We interpret the stages between adaptations in a platform trial as separate fixed-design trials. This allows platform trials to be viewed as networks of direct randomized comparisons and indirect non-randomized comparisons. Network meta-analysis methodology can be re-purposed to aggregate the total information from a platform trial and to transparently decompose this total information into direct randomized evidence and indirect non-randomized evidence. This allows sensitivity to indirect information to be assessed and the two analysis methods to be clearly compared.

Results
Simulations of platform trials were analyzed using a network approach implemented in the netmeta package in R. The results demonstrated bias of unadjusted methods in the presence of time trends in risk level. Adjustment and stratification were both unbiased when direct and indirect evidence were consistent. Network tests of inconsistency can diagnose such inconsistency when it exists. In an illustrative network analysis of one of the treatment comparisons from the STAMPEDE platform trial in metastatic prostate cancer, indirect comparisons using non-concurrent controls were inconsistent with the information from direct randomized comparisons. This supports the primary analysis approach of STAMPEDE, which used only direct randomized comparisons.

Conclusion
Network meta-analysis provides a natural methodology for analyzing the network of direct and indirect treatment comparisons from a platform trial. Such analyses provide transparent separation of direct and indirect evidence, allowing assessment of the impact of non-concurrent controls. We recommend time-stratified analysis of concurrently controlled comparisons for primary analyses, with time-adjusted analyses incorporating non-concurrent controls reserved for secondary analyses. Regardless of which methodology is used, however, a network analysis provides a useful supplement to the primary analysis.
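As a concrete illustration of the approach described above, the following is a minimal sketch in R using the netmeta package, with entirely hypothetical stage-level data (the stage labels, treatment names, and effect estimates are invented for illustration and are not taken from the paper's simulations or from STAMPEDE). Each inter-adaptation stage of a platform trial is summarized as a separate two-arm trial, giving a log hazard ratio (TE) and its standard error (seTE); netsplit() then decomposes each network estimate into its direct and indirect components, and decomp.design() provides a design-based test of inconsistency.

```r
# Minimal sketch, assuming stage-level summaries from a platform trial.
# All numbers below are hypothetical, not from the paper or from STAMPEDE.
library(netmeta)

# Each row is one inter-adaptation stage, treated as a separate two-arm trial.
# Stage 1 randomizes A vs control; stage 2 randomizes B vs control;
# stage 3 randomizes B vs A with no concurrent control, so part of the
# B-vs-control evidence flows indirectly through A.
d <- data.frame(
  studlab = c("stage1",  "stage2",  "stage3"),
  treat1  = c("A",       "B",       "B"),
  treat2  = c("control", "control", "A"),
  TE      = c(-0.20,     -0.25,     -0.05),  # log hazard ratios (hypothetical)
  seTE    = c( 0.10,      0.15,      0.12)
)

# Fit the network meta-analysis with control as the reference treatment
net <- netmeta(TE, seTE, treat1, treat2, studlab,
               data = d, sm = "HR", reference.group = "control")
summary(net)

# Decompose each network estimate into direct and indirect evidence
netsplit(net)

# Design-based decomposition of Cochran's Q as a test of inconsistency
decomp.design(net)
```

In this framing, a time-stratified analysis roughly corresponds to the direct rows of the netsplit() output, while a time-adjusted analysis using non-concurrent controls corresponds to the combined network estimate; a large discrepancy between the direct and indirect columns flags sensitivity to non-concurrent controls.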