Meta-analysis is becoming more popular in the biomedical literature and is of increasing importance to clinicians, policy makers, funding bodies, and researchers for synthesizing practice guidelines, justifying grant applications, and informing policy decisions. Meta-analysis is the quantitative synthesis of outcomes from multiple studies to reach conclusions regarding an intervention, and these studies often differ in their design and conduct. One method of meta-analysis endorsed by the Cochrane Collaboration that has raised significant controversy is the random-effects model, which assumes that the underlying effects vary across differing study populations. Larger studies often carry more information value than smaller studies, even when they are not as well designed and meticulously conducted. The random-effects model, however, redistributes weight in one direction only, from larger to smaller studies, without addressing variation in study estimates related to study quality and conduct. Moreover, when heterogeneity is large, this estimator cannot be expected to have a variance structure different from that of the arithmetic mean.

→ See related article, page 40

In this issue of the journal, Doi1 raises some very significant issues regarding meta-analysis models. He proposes a quality effects model that achieves variance reduction through additional information garnered from the design and conduct of the included studies, and he makes the case that bias reduction is not possible, since bias cannot be quantified even after detailed assessment of the studies. This stands in contrast to Greenland,2 whose aim was bias quantification, an aim whose realization has eluded researchers despite a recent attempt.3 The concept of variance reduction through quality assessment was first suggested by Doi and Thalib4 in 2008; in the current paper, Doi thoroughly reviews further developments in its conceptualization and also informs readers about the software he and his colleagues have developed to help researchers perform the analysis (www.epigear.com).

Recently, Shuster5 suggested that empirical weighting in meta-analysis should be abandoned, because he was able to demonstrate that empirical weighting of any form leads to correlation between the weights and the effect estimates, which in turn biases the weighted estimator. The only system of weights that avoids such correlation is the arithmetic mean, and Shuster therefore proposed that the solution to biased random-effects models is to use unweighted (equally weighted) estimators. The problem with this assertion is that, if empirical weighting always introduces bias, the aim of weighting has to be variance reduction; otherwise it serves no purpose at all. Indeed, Burton et al6 have stated that an unbiased estimator with large variance is itself of no practical use. For a weighted average to be more informative than the arithmetic mean, it must therefore offset its bias with a decrease in variance. Unfortunately, the random-effects model redistributes weights away from inverse-variance weighting toward equal weights and has no mechanism for variance reduction in the face of gross heterogeneity; it is therefore unable to offset model variance. Instead, as heterogeneity increases, the confidence interval around the weighted mean widens to keep pace with the variance expansion.
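This redistribution can be made concrete with a small numerical sketch. Under the standard random-effects weighting w_i = 1/(v_i + τ²), increasing the heterogeneity variance τ² pushes the weights toward equality while the pooled standard error grows; the effect sizes and variances below are invented purely for illustration and are not drawn from any study under discussion.

```python
import numpy as np

# Hypothetical effect estimates and within-study variances (illustration only;
# a small variance stands in for a large, precise study).
effects = np.array([0.30, 0.10, 0.55, 0.20, 0.40])
variances = np.array([0.01, 0.04, 0.09, 0.02, 0.25])

def pooled(effects, variances, tau2=0.0):
    """Inverse-variance pooling; tau2 > 0 gives the random-effects weighting."""
    w = 1.0 / (variances + tau2)          # w_i = 1 / (v_i + tau^2)
    se = np.sqrt(1.0 / w.sum())           # standard error of the pooled mean
    w = w / w.sum()                       # normalized weights
    return w, np.sum(w * effects), se

for tau2 in (0.0, 0.05, 0.5, 5.0):
    w, est, se = pooled(effects, variances, tau2)
    print(f"tau^2 = {tau2:<4}: weights = {np.round(w, 3)}, pooled = {est:.3f}, SE = {se:.3f}")

# As tau^2 grows, weight shifts from the larger (low-variance) studies to the
# smaller ones, the weights converge toward 1/k (the arithmetic mean), and the
# pooled SE grows: the confidence interval widens rather than narrows.
```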
Under this random-effects scheme, variance reduction, which should be the goal of any weighted estimator, is not achieved. Doi has demonstrated, for the first time, how model variance reduction can be achieved by feeding additional individual-study information into the model so that studies expected to carry greater bias-related variance are down-weighted. This reduces the actual variance of the estimator, and the simulation study illustrates the point elegantly (a simplified numerical sketch of the redistribution principle follows below). Finally, Doi demonstrates that the random-effects model is a special case of the quality effects model, namely the case in which the quality assessment is completely uninformative. Given this finding, I conclude by asking whether the time has come for organizations such as Cochrane to seriously consider updating their methodologies. At the very least, they should mount a serious scientific inquiry into the quality effects model, which has the potential to change the face of meta-analysis as we know it.
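As flagged above, here is a deliberately simplified numerical sketch of the quality-based redistribution principle. It is not Doi and Thalib's actual estimator: it simply scales each inverse-variance weight by a hypothetical quality score Q_i in [0, 1] and shares the released weight out equally, with all inputs invented for illustration.

```python
import numpy as np

def quality_adjusted_weights(variances, quality):
    """Schematic quality-based redistribution (not the exact Doi-Thalib
    estimator): each study retains a fraction Q_i of its inverse-variance
    weight; the weight released by lower-quality studies is shared equally."""
    w = 1.0 / variances
    w = w / w.sum()                      # normalized inverse-variance weights
    kept = quality * w                   # weight retained, scaled by quality
    released = (w - kept).sum()          # weight released by imperfect studies
    return kept + released / len(w)      # equal-share redistribution (sums to 1)

# Invented inputs, for illustration only.
effects = np.array([0.30, 0.10, 0.55, 0.20, 0.40])
variances = np.array([0.01, 0.04, 0.09, 0.02, 0.25])
quality = np.array([0.9, 0.5, 0.8, 1.0, 0.4])   # hypothetical scores in [0, 1]

w_q = quality_adjusted_weights(variances, quality)
print("quality-adjusted weights:", np.round(w_q, 3))
print("pooled estimate:", round(float(np.sum(w_q * effects)), 3))

# If every Q_i equals the same value q, the result is q*w_i + (1 - q)/k, a
# convex mix of inverse-variance and equal weights, the same direction of
# redistribution as the random-effects model; this echoes the point that random
# effects is the special case of an uninformative quality assessment.
```

In this schematic, an informative quality assessment moves weight toward the better-conducted studies rather than uniformly toward the smaller ones, which is the sense in which quality-based redistribution can buy variance reduction that the random-effects model cannot.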