Abstract

Before the pandemic, when orthopaedic surgeons heard the word equator, we suspect most had visions of an ecotourism jaunt to the Galapagos, or—for the more civic-minded among us—perhaps a surgical mission trip someplace in central Africa. But there is another EQUATOR we all need to know about, regardless of whether we read, produce, or edit musculoskeletal research. The EQUATOR network (Enhancing the QUAlity and Transparency Of health Research; we don’t like these forced acronyms, either) is an international umbrella organization that works to improve the reporting and conduct of healthcare-related research. While the group’s mission encompasses many aspects of research and its facilitation around the world [8], it is best known for disseminating a large number of robust guidelines that improve the reporting of health-science manuscripts. The hundreds of guidelines available at no charge at www.equator-network.org are not just useful for authors; they also can help each of us get more out of what we read. The premises supporting those tools are based on sensible critical-appraisal principles, and so the themes raised in those guidelines appear in tools for readers like the JAMA Users’ Guides to the Medical Literature [10, 15], as well as in the online reviewer tools that CORR® developed and makes freely available in the “Links to Author and Reviewer Tools” section of www.clinorthop.org [12].

If You’re an Author

If you have written a clinical research paper—prospective or retrospective, randomized or not—or a systematic review, you’ve probably crossed paths with one or more of these acronyms: STROBE (Strengthening the Reporting of Observational Studies in Epidemiology), CONSORT (Consolidated Standards of Reporting Trials), and PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses). Our purpose is not to cross your eyes with so much alphabet soup, but rather to make the point that if you’ve used those tools as an author, you’ve probably found them helpful. What you might not know is that those three commonly used guidelines are only three of the more than 400 tools available for download from the EQUATOR network’s home page. This large and growing number of guideline documents reflects the increasing complexity of medical research and the fact that new research approaches are constantly being developed. While a retrospective study about a novel diagnostic test might be improved by following STROBE, it would be better served, for example, by following STARD (Standards for Reporting of Diagnostic Accuracy). In just the last few years at CORR, we’ve suggested ways for authors to improve their studies about (and for readers to get more out of) machine learning and qualitative research [14], animal research [13], and survey studies [5]. As you might expect, each of those has one or more relevant guidelines available through EQUATOR. Best of all, you don’t have to remember that CHERRIES is the guideline for survey studies and CHEERS is the one for economic analyses; it’s possible to search the EQUATOR network’s website simply by describing your study’s design. We recommend doing just that before you begin, as even experienced investigators can forget or overlook important study elements, and many journals require use of the guidelines’ checklists. At CORR, we require them for some, but not all, study designs (Table 1); having said that, we’re excited when authors choose to make use of them for studies of all kinds.
Table 1. Study types, names of guidelines, and whether they are required or recommended* when submitting research to Clinical Orthopaedics and Related Research

Type of Study | Name of Guideline | Required or Recommended?
Animal research | ARRIVE** | Recommended [13]
Diagnostic accuracy studies | STARD 2015 | Required (STARD is a close analogue to STROBE, but is more useful in the specific setting of evaluating a diagnostic test; CORR would accept either one, but STARD is preferred) [2]
Gene expression analyses | MIAME, MINSEQE, or others | Recommended [11]
Health economic evaluation | CHEERS | Recommended [4]
Machine learning and prediction models using related approaches | TRIPOD, though STARD sometimes also is useful | Recommended [14]
Observational research (retrospective or prospective)*** | STROBE | Required [2]
Randomized trials | CONSORT**** | Required [2]
Studies based on surveys | CHERRIES or ACCADEMY | Recommended [5]
Systematic reviews and meta-analyses | PRISMA**** | Required [1]
Qualitative research studies | COREQ; SRQR also is acceptable | Recommended [14]

ARRIVE = Animal Research: Reporting of In Vivo Experiments; STARD 2015 = Standards for Reporting of Diagnostic Accuracy; MIAME = Minimum Information About a Microarray Experiment; MINSEQE = Minimum Information about a high-throughput SEQuencing Experiment; CHEERS = Consolidated Health Economic Evaluation Reporting Standards; TRIPOD = Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis; STROBE = Strengthening the Reporting of Observational Studies in Epidemiology; CONSORT = Consolidated Standards of Reporting Trials; CHERRIES = CHecklist for Reporting Results of Internet E-Surveys; ACCADEMY = Academy of Critical Care: Development, Evaluation and Methodology; PRISMA = Preferred Reporting Items for Systematic Reviews and Meta-Analyses; COREQ = Consolidated Criteria for Reporting Qualitative Research; SRQR = Standards for Reporting Qualitative Research.

*More than one guideline may apply to an individual study; in general, CORR is happy to determine the best one(s) to use in consultation with authors, if questions arise.
**CORR versions of STROBE, CONSORT, PRISMA, and ARRIVE can be found on the CORR website’s Author Guidelines page under “Methodology Checklists Followed by CORR” (https://journals.lww.com/clinorthop/Pages/author-guidelines.aspx).
***Includes studies in which patients received routine care, and may have had interventions such as surgery, but did not receive an experimental intervention. Retrospective studies about surgical interventions, with or without historical control or comparator groups—so common in orthopaedic journals—generally would be included in this category.
****In addition, many guidelines have extensions for specific designs (such as CONSORT-NPT for nonpharmacological treatments, PRISMA-NMA for network meta-analyses/multiple treatment comparisons, and PRISMA-DTA for diagnostic accuracy), as well as for specific aspects of a manuscript (such as CONSORT for abstracts and PRISMA for abstracts, among others). CORR recommends consulting the EQUATOR Network (www.equator-network.org) prior to beginning a research study.

But guidelines are more than checklists. Our experience here—which involves evaluating thousands of papers a year, year after year—suggests that research teams that use reporting guidelines conduct better studies.
Since reporting guidelines rely on consensus-based scientific principles and evidence-based research, consulting them before beginning a project prompts clinician-scientists to consider why each element of a checklist is included; doing so inevitably leads to discussion of alternative approaches, and to methodological choices that improve study design, conduct, and analysis. For example, the TRIPOD guideline (Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis)—mentioned in our recent editorial about machine learning and artificial intelligence [14] and available on the EQUATOR website—distinguishes among types of prediction models according to whether and how a model has been validated (no validation, internal validation by resampling or data-splitting, or external validation). This simple distinction, which had not been so clearly formalized before, may nudge groups doing this type of work toward best practices by raising awareness of the weaknesses inherent in commonly used research strategies. Rather than reproducing what has been done in earlier studies—and, in so doing, perpetuating shortcomings in our evidence base—researchers who use guidelines in advance of a study can see the added value of a methodological step; in this case, externally validating a model (for readers who build such models, an illustrative sketch of these validation levels appears at the end of this editorial). For the researcher who is new to the topic of guidelines, we especially recommend EQUATOR’s “Toolkits” page [7], which can help new users get started, choose the right reporting guideline, and see examples of well-done scientific reporting.

If You’re a Reader

For studies of familiar designs—studies about treatments, diagnostic tests, the natural history of disease, and systematic reviews or meta-analyses—CORR offers a tool for peer reviewers that works equally well to help readers probe for the soft spots in studies, so they will not be misled by unreliable claims. This tool is free for anyone to use and can be found in the “Links to Author and Reviewer Tools” section of www.clinorthop.org (Fig. 1). It does not require that the article being analyzed come from CORR; in fact, many users share it with their residents to help them prepare for journal clubs and for reading prior to going to the operating room.

Fig. 1. CORR’s homepage, with a red arrow pointing to our online tool for peer reviewers; it is equally useful for thoughtful readers as they evaluate published papers.

For less-common study designs, though, readers who want to go deep can do so by searching for the most applicable set of guidelines on www.equator-network.org. If many of the elements listed in the reporting guideline you download are missing from the study you’re reading, that’s a good sign that the study may not have been well conducted, and that the key findings may not be exactly as they appear. EQUATOR’s “Toolkits” page also offers a set of resources for peer reviewers [6], which are equally helpful for those who simply want to become more thoughtful readers.

If You’re a Journal Editor

Perhaps the most difficult balance editors try to strike is between being overly prescriptive and overly permissive. Should we require all studies using surveys to follow the CHERRIES guideline [9]?
Must all studies based on animal models report according to the ARRIVE guideline [3, 13]? Do meta-analyses have to follow PRISMA [1]? (CORR’s standards for those three guidelines are recommend, recommend, and require, respectively; see Table 1.) We certainly can’t make general recommendations for other journals; even proposing a move from “recommend” to “require” for a particular guideline here at CORR is a surefire way to generate a vigorous debate among our Senior Editors. What we can say with complete confidence, though, is that when an editor is evaluating a study of an unfamiliar design—or even a familiar one, if the editor in question has not internalized the relevant guidelines for that type of research—the EQUATOR network is a good place to begin.
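
For readers who build prediction models, the following is a minimal sketch in Python (using pandas and scikit-learn) of the three TRIPOD validation levels discussed above. It is purely illustrative and is not part of the TRIPOD guideline or of CORR’s submission requirements; the cohort file names, the “reoperation” outcome, and the choice of a logistic regression model are hypothetical stand-ins for whatever prediction problem a research team is studying.

```python
# Illustrative sketch only: contrasts the three validation levels named in
# TRIPOD (no validation, internal validation by resampling/data-splitting,
# and external validation). File names, the "reoperation" outcome, and the
# model choice are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_score

# Development cohort: the data used to build the model (hypothetical file).
dev = pd.read_csv("development_cohort.csv")
X_dev, y_dev = dev.drop(columns="reoperation"), dev["reoperation"]

model = LogisticRegression(max_iter=1000)
model.fit(X_dev, y_dev)

# 1. No validation: "apparent" performance, measured on the same patients
#    used to fit the model. This estimate is usually optimistic.
apparent_auc = roc_auc_score(y_dev, model.predict_proba(X_dev)[:, 1])

# 2. Internal validation by resampling/data-splitting: cross-validation
#    reuses the development cohort, but each fold is scored on patients
#    that were held out of model fitting.
internal_auc = cross_val_score(model, X_dev, y_dev,
                               cv=5, scoring="roc_auc").mean()

# 3. External validation: the fitted model is scored on an independent
#    cohort (say, another hospital or a later time period) that played no
#    part in model development.
ext = pd.read_csv("external_cohort.csv")
X_ext, y_ext = ext.drop(columns="reoperation"), ext["reoperation"]
external_auc = roc_auc_score(y_ext, model.predict_proba(X_ext)[:, 1])

print(f"Apparent AUC (no validation): {apparent_auc:.2f}")
print(f"Internal validation AUC (5-fold): {internal_auc:.2f}")
print(f"External validation AUC: {external_auc:.2f}")
```

As a rule of thumb, apparent performance tends to be the most optimistic of the three estimates and external validation the most stringent, which is why stating plainly which level a reported result represents matters so much to readers.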
