651 publications found
A framework to characterise the reproducibility of meta-analysis results with its application to direct oral anticoagulants in the acute treatment of venous thromboembolism.

The number of meta-analyses of aggregate data has increased dramatically owing to the ease of obtaining data from publications and the development of free, easy-to-use, specialised statistical software. Even when meta-analyses include the same studies, their results may vary because of different methodological choices. Assessment of the replication of meta-analyses provides an example of the variation in effect 'naturally' observed between multiple research projects. Reproducibility of results has mostly been reported using graphical descriptive representations. A quantitative analysis of such results would enable (i) breakdown of the total observed variability, with quantification of the variability generated by the replication process, and (ii) identification of which variables account for this variability, such as methodological quality or the statistical analysis procedures used. These variables might explain systematic mean differences between results as well as the dispersion of results. To quantitatively characterise the reproducibility of meta-analysis results, a bivariate linear mixed-effects model was developed to simulate both mean results and their corresponding uncertainty. Results were assigned to replication groups, with those assessing the same studies, outcomes, treatment indication and comparisons classified in the same group. A nested random-effects structure was used to break down the total variability within and between replication groups, enabling calculation of an intragroup correlation coefficient and quantification of reproducibility. Determinants of variability were investigated by modelling both mean and variance parameters using covariates. The proposed model was applied to the example of meta-analyses evaluating direct oral anticoagulants in the acute treatment of venous thromboembolism.

Relevant
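
As a reading aid for the entry above: a minimal sketch of how a nested random-effects decomposition and the resulting intragroup correlation coefficient might be written, assuming a simple two-level normal structure; the notation is illustrative and not taken from the paper.

```latex
% Illustrative decomposition for meta-analysis result i in replication group g
\hat{\theta}_{gi} = \mu + u_g + v_{gi} + \varepsilon_{gi},
\qquad u_g \sim N(0, \tau^2_{\mathrm{between}}),
\quad v_{gi} \sim N(0, \tau^2_{\mathrm{within}}),
\quad \varepsilon_{gi} \sim N(0, s^2_{gi})

% Intragroup correlation coefficient: share of variability attributable to genuine
% differences between replication groups rather than to the replication process
\rho = \frac{\tau^2_{\mathrm{between}}}{\tau^2_{\mathrm{between}} + \tau^2_{\mathrm{within}}}
```

A rho close to 1 would indicate that most variability reflects genuine differences between replication groups rather than the replication process itself, i.e., good reproducibility; covariates such as methodological quality or analysis choices could then enter both the mean and the variance components, mirroring the determinants-of-variability analysis described above.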
Catchii: Empowering literature review screening in healthcare.

A systematic review is a type of literature review that aims to collect and analyse all available evidence from the literature on a particular topic. Screening and identifying eligible articles from the vast amount of literature is a time-consuming task. Specialised software has been developed to aid the screening process and save significant time and labour. However, the most suitable software tools available often come at a cost or offer only a limited or trial version for free. In this paper, we report the release of a new software application, Catchii, which contains all the important features of a systematic review screening application while being completely free. It supports a user at different stages of screening, from detecting duplicates to creating the final flowchart for a publication. Catchii is designed to provide a good user experience and streamline the screening process through its clean and user-friendly interface on both computers and mobile devices. All in all, Catchii is a valuable addition to the current selection of systematic review screening applications. It enables researchers without financial resources to access features found in the best paid tools, while also reducing costs for those who have previously relied on paid applications. Catchii is available at https://catchii.org.

Open Access
Relevant
Use of multiple covariates in assessing treatment-effect modifiers: A methodological review of individual participant data meta-analyses.

Individual participant data (IPD) meta-analyses of randomised trials are considered a reliable way to assess participant-level treatment effect modifiers, but may not make the best use of the available data. Traditionally, effect modifiers are explored one covariate at a time, which raises the possibility that apparent treatment-covariate interaction is due to confounding from a different, related covariate. We aimed to evaluate current practice when estimating treatment-covariate interactions in IPD meta-analysis, focusing specifically on the involvement of additional covariates in the models. We reviewed 100 IPD meta-analyses of randomised trials, published between 2015 and 2020, that assessed at least one treatment-covariate interaction. We identified four approaches to handling additional covariates: (1) Single interaction model (unadjusted): no additional covariates included (57/100 IPD meta-analyses); (2) Single interaction model (adjusted): adjustment for the main effect of at least one additional covariate (35/100); (3) Multiple interactions model: adjustment for at least one two-way interaction between treatment and an additional covariate (3/100); and (4) Three-way interaction model: a three-way interaction formed between treatment, the additional covariate and the potential effect modifier (5/100). IPD is not being utilised to its fullest extent. In an exemplar dataset, we demonstrate how these approaches lead to different conclusions. Researchers should adjust for additional covariates when estimating interactions in IPD meta-analysis, provided they also adjust for their main effects, which is already widely recommended. Further, they should consider whether more complex approaches could provide better information on who might benefit most from treatments, improving patient choice, treatment policy and practice.

Open Access
Relevant
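
To make the four approaches above concrete, a hedged sketch of the corresponding model forms; the notation (outcome y_ij for participant i in trial j, treatment t, potential effect modifier x, additional covariate z, trial-specific intercepts alpha_j) is mine and not taken from the review.

```latex
% (1) Single interaction model, unadjusted
y_{ij} = \alpha_j + \beta t_{ij} + \gamma x_{ij} + \delta \, t_{ij} x_{ij}

% (2) Single interaction model, adjusted: add the main effect of z
y_{ij} = \alpha_j + \beta t_{ij} + \gamma x_{ij} + \delta \, t_{ij} x_{ij} + \lambda z_{ij}

% (3) Multiple interactions model: also add a treatment-z interaction
y_{ij} = \alpha_j + \beta t_{ij} + \gamma x_{ij} + \delta \, t_{ij} x_{ij}
       + \lambda z_{ij} + \mu \, t_{ij} z_{ij}

% (4) Three-way interaction model: full three-way term plus its lower-order terms
y_{ij} = \alpha_j + \beta t_{ij} + \gamma x_{ij} + \lambda z_{ij}
       + \delta \, t_{ij} x_{ij} + \mu \, t_{ij} z_{ij} + \kappa \, x_{ij} z_{ij}
       + \nu \, t_{ij} x_{ij} z_{ij}
```

In each sketch, delta (and, in model 4, its modification by z) carries the effect-modifier information of interest, while the trial-specific intercepts keep comparisons within trials.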
A comparison of machine learning methods to find clinical trials for inclusion in new systematic reviews from their PROSPERO registrations prior to searching and screening.

Searching for trials is a key task in systematic reviews and a focus of automation. Previous approaches required knowing examples of relevant trials in advance, and most methods focus on published trial articles. To complement existing tools, we compared methods for finding relevant trial registrations given an International Prospective Register of Systematic Reviews (PROSPERO) entry, where no relevant trials have been screened for inclusion in advance. We compared SciBERT-based PICO extraction (SciBERT is a variant of Bidirectional Encoder Representations from Transformers, BERT, trained on scientific text), MetaMap, and term-based representations, using an imperfect dataset mined from 3,632 PROSPERO entries connected to a subset of 65,662 trial registrations and 65,834 trial articles known to be included in systematic reviews. Performance was measured by the median rank and recall by rank of trials that were eventually included in the published systematic reviews. When ranking trial registrations relative to PROSPERO entries, 296 trial registrations needed to be screened to identify half of the relevant trials, and the best-performing approach used a basic term-based representation. When ranking trial articles relative to PROSPERO entries, 162 trial articles needed to be screened to identify half of the relevant trials, and the best-performing approach used a term-based representation. The results show that MetaMap and term-based representations outperformed approaches that included PICO extraction for this use case. They suggest that, when starting from a PROSPERO entry with no trials yet screened for inclusion, automated methods can reduce workload, but additional processes are still needed to efficiently identify trial registrations or trial articles that meet the inclusion criteria of a systematic review.

Open Access
Relevant
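
A small, hypothetical sketch of a term-based ranking of trial registrations against a PROSPERO entry, in the spirit of the best-performing representation reported above; the TF-IDF set-up, example texts, and preprocessing are assumptions and not the study's actual pipeline.

```python
# Hedged sketch: rank candidate trial registrations by TF-IDF cosine similarity
# to a PROSPERO entry. Field choices and preprocessing are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

prospero_entry = "Direct oral anticoagulants versus vitamin K antagonists for venous thromboembolism"
trial_registrations = [
    "A randomised trial of apixaban versus warfarin in acute venous thromboembolism",
    "Exercise therapy for chronic low back pain: a registered controlled trial",
    "Rivaroxaban for the treatment of symptomatic deep-vein thrombosis",
]

# Build one vocabulary over the review question and the candidate registrations.
vectorizer = TfidfVectorizer(stop_words="english", ngram_range=(1, 2))
matrix = vectorizer.fit_transform([prospero_entry] + trial_registrations)

# Rank registrations by cosine similarity to the PROSPERO entry (row 0).
scores = cosine_similarity(matrix[0], matrix[1:]).ravel()
for score, text in sorted(zip(scores, trial_registrations), reverse=True):
    print(f"{score:.3f}  {text}")
```

In practice the representations would be built over full registration records rather than titles alone, and screening workload would be evaluated by the rank positions of the registrations eventually included in the review.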
Sensitivity analysis for the interactive effects of internal bias and publication bias in meta-analyses.

Meta-analyses can be compromised by studies' internal biases (e.g., confounding in nonrandomized studies) as well as by publication bias. These biases often operate nonadditively: publication bias that favors significant, positive results selects indirectly for studies with more internal bias. We propose sensitivity analyses that address two questions: (1) "For a given severity of internal bias across studies and of publication bias, how much could the results change?"; and (2) "For a given severity of publication bias, how severe would internal bias have to be, hypothetically, to attenuate the results to the null or by a given amount?" These methods consider the average internal bias across studies, obviating the need to specify the bias in each study individually. The analyst can assume that internal bias affects all studies, or alternatively that it affects only a known subset (e.g., nonrandomized studies). The internal bias can be of unknown origin or, for certain types of bias in causal estimates, can be bounded analytically. The analyst can specify the severity of publication bias or, alternatively, consider a "worst-case" form of publication bias. Robust estimation methods accommodate non-normal effects, small meta-analyses, and clustered estimates. As we illustrate by re-analyzing published meta-analyses, the methods can provide insights that are not captured by simply considering each bias in turn. An R package implementing the methods is available (multibiasmeta).

Open Access
Relevant
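
A rough illustration of the joint-correction idea, not the estimators actually implemented in multibiasmeta: suppose affirmative results (significant and positive) are eta times more likely to be published than nonaffirmative results, and that internally biased studies over-estimate the effect by an average additive amount B on the analysis scale. The additive-bias term, the selection ratio, and the weighting scheme below are simplifying assumptions.

```latex
% Illustrative bias-corrected pooled estimate: inverse-variance weights 1/v_i, with
% under-represented nonaffirmative studies additionally upweighted by eta, and the
% average internal bias subtracted from the affected studies (B_i = B if study i is
% internally biased, B_i = 0 otherwise)
\hat{\mu}(\eta, B) =
  \frac{\sum_i w_i \, (y_i - B_i)}{\sum_i w_i},
\qquad
w_i = \frac{1}{v_i} \times
  \begin{cases}
    1    & \text{if study } i \text{ is affirmative} \\
    \eta & \text{if study } i \text{ is nonaffirmative}
  \end{cases}
```

Question (2) in the abstract then corresponds to fixing eta and solving for the B that attenuates the corrected estimate to the null (or to a chosen value).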
Evaluation of statistical methods used to meta-analyse results from interrupted time series studies: A simulation study.

Interrupted time series (ITS) studies are often meta-analysed to inform public health and policy decisions, but examination of the statistical methods for ITS analysis and meta-analysis in this context is limited. We simulated meta-analyses of ITS studies with continuous outcome data, analysed the studies using segmented linear regression with two estimation methods [ordinary least squares (OLS) and restricted maximum likelihood (REML)], and meta-analysed the immediate level- and slope-change effect estimates using fixed-effect and (multiple) random-effects meta-analysis methods. Simulation design parameters included series length; magnitude of lag-1 autocorrelation; magnitude of level and slope changes; number of included studies; and effect size heterogeneity. All meta-analysis methods yielded unbiased estimates of the interruption effects. All random-effects meta-analysis methods yielded coverage close to the nominal level, irrespective of the ITS analysis method used and the other design parameters. However, heterogeneity was frequently overestimated in scenarios where the ITS study standard errors were underestimated, which occurred for short series or when the ITS analysis method did not appropriately account for autocorrelation. The performance of meta-analysis methods depends on the design and analysis of the included ITS studies. Although all random-effects methods performed well in terms of coverage, irrespective of the ITS analysis method, we recommend using effect estimates calculated from ITS methods that adjust for autocorrelation when possible. Doing so is likely to lead to more accurate estimates of the heterogeneity variance.

Open Access
Relevant
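
A compact, hypothetical sketch of the two stages described above: OLS segmented regression of simulated ITS series, then DerSimonian-Laird random-effects pooling of the immediate level-change estimates. The simulated data and parameter values are assumptions; the study also used REML and autocorrelation-adjusted ITS methods, which are not shown.

```python
# Hedged sketch: segmented (ITS) regression per study via OLS, then random-effects
# meta-analysis of the level-change estimates with a DerSimonian-Laird tau^2.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

def level_change_estimate(n_points=24, interruption=12, level_change=2.0):
    """Fit y = b0 + b1*time + b2*post + b3*time_since_interruption + error (OLS)."""
    time = np.arange(n_points)
    post = (time >= interruption).astype(float)
    time_since = np.clip(time - interruption, 0, None)
    y = 1.0 + 0.1 * time + level_change * post + 0.05 * time_since + rng.normal(0, 1, n_points)
    X = sm.add_constant(np.column_stack([time, post, time_since]))
    fit = sm.OLS(y, X).fit()
    return fit.params[2], fit.bse[2] ** 2  # immediate level change and its variance

estimates, variances = zip(*(level_change_estimate() for _ in range(10)))
y, v = np.array(estimates), np.array(variances)

# DerSimonian-Laird between-study variance, then random-effects pooling.
w_fixed = 1 / v
q = np.sum(w_fixed * (y - np.sum(w_fixed * y) / np.sum(w_fixed)) ** 2)
c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
tau2 = max(0.0, (q - (len(y) - 1)) / c)
w_random = 1 / (v + tau2)
pooled = np.sum(w_random * y) / np.sum(w_random)
pooled_se = np.sqrt(1 / np.sum(w_random))
print(f"pooled level change: {pooled:.2f} (SE {pooled_se:.2f}), tau^2 = {tau2:.3f}")
```

The finding above corresponds to the variance inputs: if the OLS standard errors from short or autocorrelated series are too small, the excess spread is absorbed into tau^2, inflating the heterogeneity estimate.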
Causally interpretable meta-analysis: Clearly defined causal effects and two case studies.

Meta-analysis is commonly used to combine results from multiple clinical trials, but traditional meta-analysis methods do not refer explicitly to a population of individuals to whom the results apply, and it is not clear how to use their results to assess a treatment's effect in a population of interest. We describe recently introduced causally interpretable meta-analysis methods and apply their treatment effect estimators to two individual participant data sets. These estimators transport estimated treatment effects from the studies in the meta-analysis to a specified target population using the individuals' potentially effect-modifying covariates. We consider different regression and weighting methods within this approach and compare the results to those of traditional aggregated-data meta-analysis methods. In our applications, certain versions of the causally interpretable methods performed somewhat better than the traditional methods, but the latter generally did well. The causally interpretable methods offer the most promise when covariates modify treatment effects, and our results suggest that traditional methods work well when there is little effect heterogeneity. The causally interpretable approach gives meta-analysis an appealing theoretical framework by relating an estimator directly to a specific population, and it lays a solid foundation for future developments.

Open Access
Relevant
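
A hypothetical sketch of one weighting idea in the spirit of the entry above: transporting a single trial's effect estimate to a target population by inverse-odds-of-participation weighting on an effect-modifying covariate. The simulated data, the single covariate, and the logistic participation model are illustrative assumptions, not the estimators used in the paper.

```python
# Hedged sketch: transport a trial's treatment effect to a target population by
# weighting trial participants by the inverse odds of trial participation given
# an effect-modifying covariate x.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Trial sample: the treatment effect depends on x (effect = 1 + x).
n_trial, n_target = 2000, 2000
x_trial = rng.normal(0.5, 1, n_trial)          # trial enrols higher-x individuals
treat = rng.integers(0, 2, n_trial)
y = 1 + x_trial + treat * (1 + x_trial) + rng.normal(0, 1, n_trial)

# Target population has a different covariate distribution.
x_target = rng.normal(-0.5, 1, n_target)

# Model the probability of being in the trial (vs. target) given x, then weight
# trial participants by the inverse odds so they resemble the target population.
s = np.concatenate([np.ones(n_trial), np.zeros(n_target)])
x_all = np.concatenate([x_trial, x_target]).reshape(-1, 1)
p_trial = LogisticRegression().fit(x_all, s).predict_proba(x_trial.reshape(-1, 1))[:, 1]
w = (1 - p_trial) / p_trial   # inverse odds of trial participation

def weighted_mean(values, weights):
    return np.sum(values * weights) / np.sum(weights)

naive = y[treat == 1].mean() - y[treat == 0].mean()
transported = (weighted_mean(y[treat == 1], w[treat == 1])
               - weighted_mean(y[treat == 0], w[treat == 0]))
print(f"trial-population effect estimate: {naive:.2f}")
print(f"transported (target-population) estimate: {transported:.2f}")
```

Because the covariate modifies the effect and its distribution differs between the trial and the target population, the naive and transported estimates diverge here; with little effect heterogeneity the two would roughly agree, matching the abstract's conclusion about when traditional methods suffice.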