(Can J Psychiatry 2005;50:829-831)

It ain't so much the things we don't know that get us in trouble. It's the things we know that just ain't so.
-Artemus Ward

In a recent issue of the British Medical Journal, Moncrieff and Kirsch (1) concluded that "recent meta-analyses show selective serotonin reuptake inhibitors [SSRIs] have no clinically meaningful advantage over placebo" and that "methodological artefacts may account for the small degree of superiority shown over placebo." Needless to say, this article generated a large number of letters to the editor, citing everything from despair about the lack of available alternatives to charges that the authors overlooked or ignored evidence about the positive effects of SSRIs, that they misinterpreted the findings and recommendations of the National Institute for Health and Clinical Excellence (NICE), that both the authors and NICE used inappropriate criteria to evaluate improvement, that the authors made erroneous assumptions about the distribution of depression, and so on.

This editorial does not aim to critique the article by Moncrieff and Kirsch. Rather, it tries to explain why different people with honourable intentions can come to different conclusions regarding metaanalyses. Suffice it to say for now that their summary and recommendations are by no means accepted by all. Other metaanalyses (for example, 2-4), including one coauthored by Moncrieff (5), have supported the use of SSRIs; this article has merely brought the controversy to a head. In fact, Moncrieff and Kirsch's conclusions should not come as a surprise. Fourteen years ago, Greenberg and others (6) came to similar conclusions regarding the effectiveness (or rather, the lack thereof) of the older class of tricyclic antidepressants (TCAs). What some may find strange is why this debate (and similar ones regarding the effectiveness of interventions ranging from screening for breast cancer to the use of cholinesterase inhibitors in Alzheimer's disease) is still going on. After all, weren't we promised that metaanalyses would provide definitive answers to questions such as these?

Metaanalysis is predicated on the assumption (or perhaps more a belief and a hope) that objectivity regarding the criteria used for conducting literature searches, selecting the articles to include or exclude, and abstracting and summarizing the findings would result in unbiased and unequivocal answers. In some hierarchies of evidence, metaanalyses sit at the top, trumping even very large randomized controlled trials (7). Since Smith and Glass's pioneering 1977 metaanalysis of psychotherapy (8), there has been an exponential explosion in the number published in the medical and psychological literature. Doing a simple Medline and PsycLit search, using just the keyword "metaanalysis," I found 3 published in 1981, 422 in 1991, and 1712 in 2003. Indeed, there are international organizations, such as the Cochrane Collaboration in medicine and the Campbell Collaboration in the social sciences, devoted exclusively to conducting and publishing metaanalyses. Further, there are regularly published compendia of treatment recommendations based on their results (9). The fact is, though, that disagreements among metaanalyses of the same topic are quite common.
Oxman and Guyatt found that 5 reviews about the need to treat mild hypertension all resulted in different recommendations (10), and Munsinger (11) and Kamin (12), reviewing the same articles about environmental effects on intelligence, came to diametrically opposite conclusions. Indeed, our own review of the effectiveness of TCAs (13) disagreed with Greenberg and others' findings (6). The reality is that, despite the claims of true believers, metaanalysis is neither a purely objective, mechanical process nor a panacea for answering all questions. There are 2 major reasons why metaanalyses may differ in the conclusions they draw: methodological considerations and interpretation. …