Background
Network meta-analysis synthesises data from multiple clinical trials to assess the comparative efficacy of healthcare interventions in similar patient populations. When trial data are reported heterogeneously, i.e. data are missing for one or more outcomes of interest, synthesis can lead to disconnected networks of evidence, increased uncertainty, and potentially biased estimates, with serious implications for decision-making. To overcome this issue, strength can be borrowed between outcomes of interest in a multivariate network meta-analysis. Furthermore, when relatively few trials inform each treatment comparison, the resulting sparsity of the treatment networks can lead to substantial parameter uncertainty. The multivariate approach can be further extended to borrow strength between interventions of the same class using hierarchical models.

Methods
We extend the trivariate network meta-analysis model to incorporate exchangeability between treatment effects belonging to the same class of intervention, increasing precision in treatment effect estimates. We further incorporate a missing-data framework to estimate uncertainty for trials that did not report measures of variability, maximising the use of all available information for healthcare decision-making. The methods are applied to a motivating dataset in overactive bladder syndrome, where the outcomes of interest were mean change from baseline in incontinence, voiding and urgency episodes. All models were fitted using Bayesian Markov chain Monte Carlo (MCMC) methods in WinBUGS.

Results
All models (univariate, multivariate, and multivariate incorporating class effects) produced similar point estimates for all treatment effects. Incorporating class effects in the multivariate models often increased the precision of treatment effect estimates.

Conclusions
Multivariate network meta-analysis incorporating class effects allowed all interventions to be compared across all outcome measures, ameliorating the potential impact of outcome reporting bias, and further borrowed strength between interventions belonging to the same class of treatment, increasing the precision of treatment effect estimates for healthcare policy and decision-making.
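To illustrate why exchangeable class effects increase precision, the sketch below shows the conjugate normal-normal update at the heart of such hierarchical shrinkage: a treatment-level estimate is pulled towards its class mean, and its conditional posterior variance is smaller than either the trial-level or the class-level variance alone. This is a deliberately simplified single-outcome sketch with hypothetical numbers, not the authors' trivariate WinBUGS model.

```python
import numpy as np


def shrink_to_class(y, s2, mu_class, tau2_class):
    """Conjugate normal-normal update: shrink a treatment-level
    estimate y (with sampling variance s2) towards its class mean
    mu_class (with between-treatment-within-class variance
    tau2_class). Returns the conditional posterior mean and variance."""
    w_data = 1.0 / s2            # precision of the trial evidence
    w_class = 1.0 / tau2_class   # precision contributed by the class level
    post_var = 1.0 / (w_data + w_class)
    post_mean = post_var * (w_data * y + w_class * mu_class)
    return post_mean, post_var


# Hypothetical values: one treatment's estimated mean change from
# baseline (e.g. incontinence episodes) and its variance, plus an
# assumed class mean and between-treatment variance for its class.
y, s2 = -0.80, 0.25
mu_class, tau2_class = -0.50, 0.10

m, v = shrink_to_class(y, s2, mu_class, tau2_class)
# The posterior mean lies between the trial estimate and the class
# mean, and the posterior variance is below both s2 and tau2_class,
# which is the "increased precision" reported in the Results.
print(m, v)
```

In the full model this update is embedded in an MCMC scheme in which the class means and variances are themselves estimated, so the degree of shrinkage is learned from the data rather than fixed as here.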