One situation in which Rule no. 1 is violated is when a reference is quoted on the basis of its abstract only. This is a very dangerous practice, since abstracts are often not precise accounts of the real contents of a paper. The other situation where Rule no. 1 is violated is when primary references are quoted on the basis of a secondary reference, such as another research paper or a review. Again, this is a dangerous practice, as documented by Evans et al. [5] in a review of the accuracy of quotation of 137 randomly chosen references from The American Journal of Surgery; Surgery, Gynecology and Obstetrics; and Surgery. A major error of quotation was assigned if the referenced article failed to substantiate, was unrelated to, or even contradicted the assertion made by the quoting authors. A depressing 27% of the quotations (37 of 137) were classified as major errors. It is probably unrealistic to expect even the keenest reader to check all the primary references of a paper. For the key findings quoted in a paper, however, a closer look at the primary reference is a must for the critical reader.

Rule no. 2 is equally important: you should never rely on the authors’ conclusions but should try to reach your own based on the data presented in the paper. In the standard layout of a scientific paper, the Introduction, Discussion and Conclusion sections are mainly window dressing. That is not to say that they cannot be informative if they are well written, but the substance of a paper is in the Materials and Methods and the Results sections. A rough guide to what to look for in a report of a randomised controlled trial (RCT) is found in Table 1. It is not meant as a checklist to be ticked off while reading the paper, as most of the items have no simple yes/no answer. It is more like a shopping list of the ingredients that the critical reader will need in order to assess the strength of evidence presented in a paper. Some of the items in this table will be discussed later.

Grading the strength of evidence

Clearly, it would be extremely helpful to have a validated instrument for assessing the strength of evidence produced by a given study, and this has been the subject of considerable research effort. A recent systematic review prepared by West and colleagues for the U.S. Department of Health and Human Services found a total of 1602 publications relating to the assessment of study quality or strength of evidence [6]. A total of 121 systems were reviewed, including scales, checklists and other instruments or guidelines for the assessment of quality or evidence strength. Twenty of these systems were developed for the evaluation of systematic reviews and 49 for RCTs. In addition, the report reviewed 40 systems for grading the strength of a body of evidence. Unfortunately, there is no consensus on the most adequate quality assessment instrument, and although there is considerable overlap between the items included in the various scales and checklists, there are also important differences.

This was convincingly shown by Juni et al. [7], who applied 25 different scales to assess the quality of 17 trials comparing low molecular weight heparin (LMWH) with standard heparin in the prevention of post-operative thrombosis. For each scale, the possible relative advantage of LMWH was estimated separately in two strata: high-quality and low-quality trials. For six of the scales, high-quality studies showed no significant difference between the two types of heparin, whereas low-quality studies showed a significant benefit from LMWH. For seven other scales, the result was the opposite: high-quality studies showed a benefit from LMWH whereas low-quality studies did not. For the remaining 12 scales, the effect estimates were similar in the two quality strata.
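To make the logic of such a quality-stratified sensitivity analysis concrete, here is a minimal Python sketch that pools trial results separately within high-quality and low-quality strata using fixed-effect inverse-variance weighting. The trial data, the cut-off score of 3 and the choice of relative risk as the effect measure are illustrative assumptions only; they are not the data or the exact methods of Juni et al. [7].

import math

# Hypothetical trial records: (log relative risk of thrombosis for LMWH versus
# standard heparin, its standard error, and the trial's score on one quality scale).
# All numbers are invented for illustration.
trials = [
    (-0.45, 0.20, 4), (-0.10, 0.15, 5), (-0.60, 0.30, 2),
    (-0.70, 0.25, 1), (-0.05, 0.18, 5), (-0.55, 0.28, 2),
]

def pooled_log_rr(subset):
    # Fixed-effect (inverse-variance) pooled log relative risk with a 95% CI.
    weights = [1.0 / se ** 2 for _, se, _ in subset]
    est = sum(w * lrr for w, (lrr, _, _) in zip(weights, subset)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    return est, est - 1.96 * se_pooled, est + 1.96 * se_pooled

# Stratify by quality (here, a score of 3 or more counts as high quality)
# and pool each stratum separately, as in the analysis described above.
for label, stratum in [
    ("high-quality trials", [t for t in trials if t[2] >= 3]),
    ("low-quality trials", [t for t in trials if t[2] < 3]),
]:
    est, low, high = pooled_log_rr(stratum)
    print(f"{label}: RR = {math.exp(est):.2f} "
          f"(95% CI {math.exp(low):.2f} to {math.exp(high):.2f})")

Re-running the same stratified pooling once per quality scale, each time with that scale's own scores, yields the kind of scale-by-scale comparison reported above, which is why the choice of scale can flip the apparent conclusion.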
One of the most frequently cited and used of these quality assessment scales is the simple scale proposed by Jadad et al. [8], which assesses three elements of study design: whether the study is appropriately randomised, whether it is appropriately blinded, and whether withdrawals and dropouts are described. The resulting score ranges from 0 to 5 (see Table 2). The scale was developed and applied in an evaluation of the quality of 36 reports of RCTs in pain research. The authors used a panel of assessors and showed that the scale produced consistent results and that the effect estimate from double-blind trials was significantly lower than the estimate from non-blinded trials. Subsequent authors have applied the Jadad scale in diverse fields of medicine. There are, however, two obvious but important caveats here. First
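Because the Jadad score is a simple additive rule, it can be written down as a short scoring function. The sketch below follows the commonly published structure of the scale (one point each for randomisation, double blinding and a description of withdrawals and dropouts, with an extra point or a one-point deduction depending on whether the reported method is appropriate or inappropriate); since Table 2 is not reproduced here, the item wording is paraphrased, and the function and parameter names are our own.

def jadad_score(randomised, randomisation_appropriate, randomisation_inappropriate,
                double_blind, blinding_appropriate, blinding_inappropriate,
                withdrawals_described):
    # A sketch of the Jadad scale: returns a score between 0 and 5
    # from yes/no judgements about a trial report.
    score = 0
    if randomised:
        score += 1                        # described as randomised
        if randomisation_appropriate:     # e.g. a computer-generated sequence
            score += 1
        if randomisation_inappropriate:   # e.g. allocation by date of birth
            score -= 1
    if double_blind:
        score += 1                        # described as double-blind
        if blinding_appropriate:          # e.g. an identical-looking placebo
            score += 1
        if blinding_inappropriate:        # e.g. tablets compared with injections
            score -= 1
    if withdrawals_described:
        score += 1                        # withdrawals and dropouts accounted for
    return score

For example, a trial that reports an appropriate randomisation sequence, is described as double-blind without further detail of the blinding method, and accounts for all withdrawals would score 4 out of 5.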
