Abstract

In the recent International Congress on Peer Review in Biomedical Publication, Chalmers described three types of bias related to the conduct of research and the publication and assessment of its findings: pre-publication, publication and post-publication [1]. Pre-publication bias relates to the process of selecting a hypothesis, designing a study to test it and obtaining the funds to conduct the study. Although these activities are clearly amenable to distortion and ethical concerns, they are beyond the scope of this paper, which will focus instead on the subsequent phases related to the dissemination of research findings and their interpretation. The impetus to discuss biases in the publication and interpretation of research data is of course related to the notion that replicability of results is an important criterion for inferring causal associations. Epidemiologic studies may thus be broadly regarded as random “experiments” to test a given hypothesis. Consequently, if a comprehensive compilation or a sample of such studies could be obtained, an average measure of association estimating the true association could be generated. Implicit in such a notion are two assumptions: (1) that each study is unbiased, so that the average pooled result is also an unbiased representation of the true value, and (2) that the sample of studies used to derive the pooled result is a representative or unbiased sample of all studies. Publication bias can thus be regarded as a form of selection bias, as it occurs when published studies do not comprise a representative sample of a theoretical population of studies. The relationship of publication and post-publication biases to ethics is a straightforward one, as either inferences based on a biased sample of studies or biased interpretations of a representative sample of studies may have profound implications for patient care and public health practice.
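The consequence of violating the second assumption can be illustrated with a minimal simulation sketch (all numbers hypothetical): studies estimating a truly null association are "published" only when their result is positive and statistically significant, and the pooled average of the published subset overstates the true effect even though every individual study is unbiased.

```python
# A minimal simulation sketch of publication bias as selection bias.
# All parameter values (true effect, standard error, study count) are
# illustrative assumptions, not figures from the paper.
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.0   # true association is null
SE = 0.5            # common standard error across studies (assumed)
N_STUDIES = 10_000

# Each study's estimate is the true effect plus sampling noise,
# so the full collection of studies is an unbiased sample.
estimates = [random.gauss(TRUE_EFFECT, SE) for _ in range(N_STUDIES)]

# "Publish" only studies whose z-statistic exceeds 1.96
# (i.e., positive results significant at the 5% level).
published = [e for e in estimates if e / SE > 1.96]

pooled_all = statistics.mean(estimates)        # unbiased, near the true 0
pooled_published = statistics.mean(published)  # biased upward by selection

print(f"pooled estimate, all studies:       {pooled_all:+.3f}")
print(f"pooled estimate, published studies: {pooled_published:+.3f}")
```

Under these assumptions the pooled estimate from all studies hovers near zero, while the pooled estimate from the "published" subset is pushed well above it, which is exactly the selection-bias mechanism described above.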
Publication bias can be more specifically defined as a bias that stems from allowing factors other than study quality, such as the direction of findings, to dictate acceptability for publication. Although publication bias should be defined as a systematic tendency to publish any one type of result, be it positive, negative or null, it stands to reason that intellectual excitement is more easily generated by a “positive” result than by unexpectedly “negative” or null results, and most of the examples of publication bias appearing in the literature do indeed concern a tendency to publish positive results. One of the earliest attempts to address the issue of publication bias was a study done by Sterling 30 years ago [2]. Sterling showed that 97% of 294 studies published in four psychology journals over 1 year had rejected the null hypothesis at the α level of 5%. He concluded that “positive” studies were more likely to be published than studies in which the null hypothesis could not be rejected. A subsequent study by Simes [3] underscored the impact of publication bias on medical decision making. Simes assessed the results of published trials of therapy for ovarian cancer and multiple myeloma, and compared those with the results of all trials registered in the International Cancer Research Data Bank (ICRDB), a registry of clinical trials containing the majority of NCI-funded U.S. trials, as well as some trials done outside the U.S. These trials assessed the effect of either an initial alkylating agent or combination therapy on the survival of advanced ovarian cancer, and the use of either cytotoxic
