Abstract

Since January 2003, all clinical scientific articles published in the American volume of The Journal of Bone and Joint Surgery (JBJS-A) have included a level-of-evidence rating. The aim of the current study was to evaluate the interobserver agreement among reviewers, with varying levels of epidemiology training, in categorizing the levels of evidence of these clinical studies.

Fifty-one consecutive clinical papers published in JBJS-A were identified by a computerized search of the table of contents from January 2003 through June 2003. Each paper was blinded so that only the title, abstract (without the designated level of evidence), and methods section were provided to the reviewers. The papers were coded and randomly organized in a binder. Six surgeons graded each blinded paper for (1) the type of study (therapeutic, prognostic, diagnostic test, or economic or decision analysis), (2) the level of evidence (on a scale of I through V), and (3) the subcategory within the particular level of evidence. Three of the surgeons were members of the JBJS-A Editorial Board, two were reviewers for JBJS-A, and one was an active researcher not formally associated with JBJS-A. The reviewers did not receive any formal training in the application of the classification system, but each was provided with a detailed description of the system used by JBJS-A. Intraclass correlation coefficients with 95% confidence intervals were determined for the reviewers' agreement regarding the type of study, the level of evidence, and the subcategory within the level of evidence.

The majority (69%) of the fifty-one included articles were studies of therapy, and 57% of the studies constituted Level-IV evidence. The intraclass correlation coefficients for the agreement among all reviewers with regard to study type, level of evidence, and subcategory within the level of evidence ranged from 0.61 to 0.75. Reviewers trained in epidemiology demonstrated greater agreement across all aspects of the classification system (intraclass correlation coefficients, 0.99 to 1.0) than did reviewers who were not trained in epidemiology (intraclass correlation coefficients, 0.60 to 0.75).

These findings suggest that both epidemiology-trained and non-epidemiology-trained reviewers can apply the levels-of-evidence guide to published studies with acceptable interobserver agreement. The validity of this system remains a question for future research.
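
As a minimal sketch of the agreement statistic reported above: the code below computes an intraclass correlation coefficient for a papers-by-raters matrix of ratings. The abstract does not state which ICC model was used, so the choice of ICC(2,1) (two-way random effects, absolute agreement, single rater), the `icc_2_1` function name, the coding of levels I through V as the integers 1 through 5, and the toy ratings matrix are all assumptions made for illustration.

```python
import numpy as np

def icc_2_1(x: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    x has shape (n_subjects, n_raters); in this study's setting, rows
    would be the 51 papers and columns the 6 reviewers' numeric
    level-of-evidence ratings (an assumed encoding of I-V as 1-5).
    """
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1, keepdims=True)   # per-paper means
    col_means = x.mean(axis=0, keepdims=True)   # per-reviewer means

    # Two-way ANOVA mean squares (Shrout & Fleiss decomposition).
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)   # between subjects
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)   # between raters
    mse = ((x - row_means - col_means + grand) ** 2).sum() / ((n - 1) * (k - 1))

    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Toy example: 5 papers rated I-V (coded 1-5) by 3 hypothetical reviewers.
ratings = np.array([
    [4, 4, 4],
    [2, 2, 3],
    [4, 4, 4],
    [1, 2, 1],
    [3, 3, 3],
], dtype=float)
print(f"ICC(2,1) = {icc_2_1(ratings):.2f}")
```

Treating the ordinal levels of evidence as equally spaced numeric scores is itself a modeling choice; an ICC near 1.0, as reported for the epidemiology-trained reviewers, indicates near-perfect absolute agreement under that assumption.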

