Abstract
The transitivity assumption is the cornerstone of network meta-analysis (NMA). Investigating the plausibility of transitivity can reveal how credible NMA results are. How commonly transitivity holds was examined on the basis of study dissimilarities in several study-level aggregate clinical and methodological characteristics reported in the systematic reviews. The present study also demonstrated the disadvantages of using multiple statistical tests to assess transitivity and compared the conclusions drawn from multiple statistical tests with those drawn from the study-dissimilarity approach. An empirical study was conducted using 209 published systematic reviews with NMA to create a database of study-level aggregate clinical and methodological characteristics, available in the tracenma R package. For each systematic review, the network of the primary outcome was considered to create a dataset of extracted study-level aggregate clinical and methodological characteristics reported in the review that may act as effect modifiers. Transitivity was evaluated by calculating study dissimilarities based on the extracted characteristics, providing a measure of overall dissimilarity within and between the observed treatment comparisons. Empirically derived thresholds of low dissimilarity were employed to determine the proportion of datasets with evidence of likely intransitivity. A one-way ANOVA and a chi-squared test were applied to each characteristic to investigate comparison dissimilarity at a 5% significance level. Study dissimilarities covered a wide range of possible values across the datasets. A 'likely concerning' extent of study dissimilarity, both within and between comparisons, dominated the analysed datasets. With a higher dissimilarity threshold, a 'likely concerning' extent of study dissimilarity persisted for objective outcomes but decreased substantially for subjective outcomes.
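The abstract does not spell out the dissimilarity measure. One common choice for scaling study-level characteristics onto a comparable [0, 1] range is a Gower-style dissimilarity; the sketch below assumes numeric characteristics only, and all data and names are illustrative, not the authors' implementation.

```python
def study_dissimilarity(a, b, ranges):
    """Gower-style dissimilarity between two studies: each absolute
    difference in a characteristic is scaled by that characteristic's
    observed range, and the scaled differences are averaged."""
    scaled = [abs(x - y) / r if r > 0 else 0.0
              for x, y, r in zip(a, b, ranges)]
    return sum(scaled) / len(scaled)

# Illustrative data: rows = studies, columns = numeric study-level
# characteristics (e.g. mean age, % female) -- values are made up.
studies = [[55.0, 40.0],
           [60.0, 55.0],
           [52.0, 45.0]]
ranges = [max(col) - min(col) for col in zip(*studies)]

# Dissimilarity between the first two studies, bounded in [0, 1]
d = study_dissimilarity(studies[0], studies[1], ranges)
```

In practice, such pairwise dissimilarities would be averaged within each treatment comparison and between comparisons to obtain the overall within- and between-comparison measures described above.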
Likely intransitivity prevailed in all datasets; however, using a higher dissimilarity threshold left a few networks consistent with transitivity for semi-objective and subjective outcomes. Statistical tests were feasible in 127 (61%) of the datasets and, in many of them, yielded conclusions that conflicted with the study-dissimilarity approach. Study dissimilarity, which arises from variation in the distribution of effect modifiers across studies, should be expected and properly quantified. Measuring the overall study dissimilarity between observed comparisons and comparing it against a suitable threshold can help determine whether concerns about likely intransitivity are warranted.
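The per-characteristic tests mentioned above can be sketched as follows. The data are invented for illustration; in practice the resulting statistics are referred to F and chi-squared distributions to obtain p-values at the 5% level (e.g. via `scipy.stats`), which this stdlib-only sketch omits.

```python
import statistics

def anova_F(groups):
    """One-way ANOVA F statistic for a continuous characteristic
    summarised per study and grouped by treatment comparison."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (statistics.mean(g) - grand) ** 2
                     for g in groups)
    ss_within = sum(sum((x - statistics.mean(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

def chi2_stat(table):
    """Pearson chi-squared statistic for a categorical characteristic
    cross-tabulated by treatment comparison (rows = comparisons)."""
    row = [sum(r) for r in table]
    col = [sum(c) for c in zip(*table)]
    total = sum(row)
    return sum((table[i][j] - row[i] * col[j] / total) ** 2
               / (row[i] * col[j] / total)
               for i in range(len(table)) for j in range(len(table[0])))

# Illustrative: mean age per study, grouped by comparison (A-B, A-C, B-C)
f = anova_F([[55, 60, 52], [70, 68, 73], [54, 58, 61]])

# Illustrative: counts of blinded vs open-label studies per comparison
x2 = chi2_stat([[8, 2], [3, 7], [5, 5]])
```

Large values of `f` or `x2` (small p-values) would flag dissimilarity in that characteristic across comparisons, though, as the abstract argues, running many such tests has well-known drawbacks.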