Abstract

The current work by Tiruvoipati and colleagues [1] measures substantive deficiencies in the reporting of randomized trials in the surgical literature. Among other things, the authors of that report indicate that lack of awareness of the CONSORT guidelines [2] contributes to the deficiencies they observed. We cannot know for sure whether the problems in the surgical literature are worse than in other contexts, but we think they are because of the general underapplication of rigorous experimental design methods for clinical questions in surgery [3].

See related articles on pages 229, 233, 241, 243, 245, and 249.

The Tiruvoipati study [1] is a review of problems relatively near the end of the evidentiary pipeline. However, I believe it is helpful to reflect on the whole culture and process of therapeutic inference in surgery to understand how the literature might be improved and, in turn, how the literature might improve the science. This means examining investigators’ attitudes and beliefs, the mindset of peer reviewers, the role of surgical journals, and the demands of readers.

With regard to attitudes and beliefs, surgeons have regularly been taught some things that dissuade them from relying on experimental methods. These include a great deal of respect for opinion and experience, the anticipation of large treatment effects, the reliability of incremental improvements in a surgical procedure, the difficulty and/or non-necessity of randomization, an underappreciation of selection and observer bias, and confidence in favorable risk/benefit ratios in properly selected patients. Such beliefs support the adequacy of relatively informal methods for evaluating treatments and are descendants of historical authoritarianism in medicine. Other contexts, like the therapeutic development of drugs, have appropriately and successfully replaced the authoritarian perspective with an experimental one, because the beliefs listed above are not routinely true and because of our frequent need to detect modest-sized but valuable treatment effects. Drug regulation has been instrumental in effecting this change. Drug developers and other nonsurgeons can provide worthwhile alternative views to some of the attitudes and beliefs of surgeons regarding rigorous clinical trial methodology. An example is the value, validity, feasibility, and frequent necessity of randomization as a device to remove bias and increase reliability. Treatment masking also deserves more than a customary dismissal.
Also, many surgical treatment effects, when present, are modest in size. Such mitigating, rather than curative, effect sizes require large, rigorous trials to provide convincing evidence.

Much of what is taught to young investigators is carried over to those who peer review manuscripts, where the scientific culture reinforces itself. It is not enough to know that trial reporting guidelines are relevant to a particular study and make them, in part, the currency of a review. What is more important is to see beyond reporting weaknesses and assess the true quality of the trial. Even more useful is the ability to know what research design is appropriate and possible and to gauge the strength of evidence on that basis. Unavoidably, therapeutic questions in surgery confound 3 effects: (1) efficacy of the procedure, (2) prognosis through patient selection, and (3) practitioner skill and supportive care. The strength of evidence from a surgical study depends largely on the ability of the research design to separate those effects, especially the first and second. It can be intimidating as a reviewer to imagine rejecting a superficially well-done study because of critical design flaws.

Journal editors play a crucial role in quality improvement that goes beyond the grooming of manuscripts. Culling is their most powerful tool, and it has to be applied aggressively if a journal is to improve itself. Multiple journals that improve themselves will begin to elevate the scientific discipline. This gives the editorial process considerable leverage and importance. Because journals are a repository of the scientific culture, speaking from both past and present, it is vital to improve them energetically. Journal editors lack direct influence in many matters (eg, determining which questions are addressed by research studies), but we should not underestimate the breadth of power they wield.

The problem of improving clinical trials in surgery comes full circle when considering how the demands of readers affect the science. Readers’ expectations depend on knowledge, attitudes, and beliefs acquired in training and through experience, and they exercise those demands in two ways. First, the effect of a published clinical trial probably depends as much on the receptivity of the audience as on the quality of the science. The “word on the street,” or lack of it, depends on receptivity, as does the willingness of readers to incorporate published findings into both their clinical and research practices. Second, readers translate their demands into concrete decisions as peer reviewers. Thus the discipline is not necessarily hard wired for progress but instead relies on the influence of teachers, of which journal editors are one example. In this way the article by Tiruvoipati and colleagues [1] points to a direction for improvement.

References

1. Tiruvoipati R, Balasubramanian SP, Atturu G, Peek GJ, Elbourne D. Improving the quality of reporting randomized controlled trials in cardiothoracic surgery: the way forward. J Thorac Cardiovasc Surg. 2006;132:233-240.
2. Moher D, Schulz KF, Altman D. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomized trials. JAMA. 2001;285:1987-1991.
3. Anyanwu AC, Treasure T. Surgical research revisited: clinical trials in the cardiothoracic surgical literature. Eur J Cardiothorac Surg. 2003;25:299-303.