Abstract

One of the most-downloaded articles in the history of Clinical Orthopaedics and Related Research® is an editorial written by my predecessor here, Richard A. Brand MD, called “Writing for Clinical Orthopaedics and Related Research” [2]. It’s no overstatement to say that his thoughts on the topic of scientific reporting have informed a generation of academic orthopaedic surgeons, because his approach was (and remains) philosophically sound, and his suggestions in that essay are clear and easy to follow. In that piece, he also correctly noted that standards of reporting as well as the ethical norms of our field change over time, and for that reason, this topic should periodically be revisited; in fact, his much-read column on this topic from 2008 was a reboot of an earlier editorial he wrote on the same topic [1]. I’ll take that as tacit permission to revisit this topic again now.

My predecessor is a thoughtful philosopher of science, with a background in that discipline to which I cannot even aspire. My goals in this summary therefore are more modest, and they are much more practical than philosophical: to describe an easy-to-follow recipe for reporting clinical and laboratory research—and how to modify that approach where needed for systematic reviews and meta-analyses—that simplifies the task of writing for CORR®. This process covers the crafting of each section of an original scientific report. Because musculoskeletal researchers and scientists write different kinds of papers, I’ll try to point out the distinctions among the main kinds of papers orthopaedic journals see; speaking generally but accurately, these include original clinical research, original nonclinical research (which includes laboratory research, surveys, research about education, and other research not directly involving patients), and research that synthesizes earlier work in a formal way (such as systematic reviews and meta-analyses, which I’ve covered in greater depth elsewhere [3, 4]).

Before diving in, here are two comments about the suggestions and one organizing principle: (1) These approaches are likely to serve authors well even if they choose to write for journals other than CORR, and (2) They are the infrastructure of free-to-use online article-building tools for authors that we’ve coded (available at www.clinorthop.org) (Fig. 1), which walk the author through the creation of an article and deliver a formatted scientific manuscript that follows these principles, ready for submission to CORR or any other journal.

Finally, the principle that undergirds all my recommendations is that good science is organized around clear, answerable research questions (Appendix 1; https://links.lww.com/CORR/B35). Good questions orient every part of a well-presented paper: The Introduction helps readers to see why the questions are worth caring about (and it ends by stating them explicitly); the Methods section tells readers how the questions were answered, and convinces them that the approaches are trustworthy; the Results section answers the questions directly, sequentially, and clearly; and the Discussion section puts the answers to those questions in the context of earlier work, aids the reader in interpreting those answers in light of salient limitations, and uses the answers to support specific, real-world, practical recommendations.

Fig. 1: This figure depicts a screenshot taken from the home page of Clinical Orthopaedics and Related Research® (www.clinorthop.org), with arrows pointing to important author resources.
Red arrows point to links to freely available article-building tools (“apps” for clinical research [which also work for laboratory and nonclinical research] and systematic reviews/meta-analyses); blue arrows point to downloadable instructions for using those tools (though the tools themselves are generally self-explanatory); and the green arrow points to a downloadable “quick-start guide” for CORR authors that offers other helpful tools. STROBE, CONSORT, and PRISMA checklists are available within each of those online tools (“apps”).

Writing the Introduction

A good Introduction needs to accomplish only three things: to convince readers that the topic justifies their interest (background), to assure readers that the paper itself will fill important knowledge gaps or help settle key controversies (rationale), and to state clear, answerable research questions. As such, it need not be long. The editors here have found that three paragraphs of modest length usually will do the trick.

An effective background paragraph explains why the topic of study (not the paper itself, but the broad topic into which the paper fits) is important. Is it common? Morbid? Expensive? Obviously, many other criteria might justify a topic as being worth readers’ attention; one way or the other, though, it’s a sales pitch—the reader here is saying “convince me”—and it’s the authors’ job to make that pitch. This paragraph differs little regardless of whether the study is original clinical research, nonclinical research, or research that synthesizes; in all of those study designs, one needs to persuade the reader that the study’s theme is worth caring about.

A good rationale paragraph identifies gaps in knowledge the paper will fill or controversies it will help settle. For that reason, each research question the authors plan to ask later on needs a bit of rationale here. A compelling rationale hooks the reader (and the reviewer, and the editor), so this is no place to cut corners: be persuasive. Don’t assume readers (or reviewers or editors) will “get it” without your help. Usually they (we) won’t. Occasionally, a second paragraph of rationale is called for; this may be the case if the study uses an unusual or unfamiliar methodological approach to answer a question and you need to convince readers it’s the right tool for the job.

The rationale section is similar in clinical and laboratory research, but it differs in systematic reviews and other study designs that synthesize findings from earlier papers (including meta-analyses and decision analyses). In those kinds of research, since authors aren’t answering “new” questions directly but rather aggregating prior evidence to do so, the rationale paragraph should explain what would be gained by aggregating or pooling earlier work. Typically, these papers are most helpful (and so the rationale is most compelling) when some studies support a concept and others oppose it, such that synthesizing disparate sources is likely to be illuminating. This laborious exercise is only worthwhile when no prior papers have done it, or when new studies have appeared since the last systematic review was published, and so it’s important to include one of those two claims in the rationale paragraph of a study that aggregates the work of others.

In all instances, a well-crafted Introduction ends with the most important part of the paper: a list of specific, answerable research questions.
If the rationale section is well written, this last paragraph can be short; often, something like “We therefore asked, (1) [then list the questions]…” works well and is all that’s needed. This recommendation for clear, answerable questions applies equally regardless of whether one is writing original research or a systematic review.

Writing the Methods

The longest section in most papers will get the shortest shrift here; the reason for this is that there are so many different methodological approaches that it’s impossible to summarize them all. Instead, I’ll offer a few broadly applicable suggestions and point to some helpful tools. The main goals of the Methods section—and this applies equally to studies of every design—are to ensure the reader knows how the study questions were answered, and if there are soft spots in the approaches used, to justify them (that is, explain why they’re not disqualifying so the reader stays with you). The specifics of how to accomplish this differ with each study’s design.

Fortunately, easy-to-follow, freely available checklists exist for studies of every design [5]. Doing a retrospective study about a treatment? Use STROBE (Strengthening the Reporting of Observational Studies in Epidemiology). Evaluating a new diagnostic test? Pull up STARD (Standards for the Reporting of Diagnostic Accuracy Studies). Writing up a randomized trial? Reach for CONSORT (Consolidated Standards of Reporting Trials). Conducting a systematic review or meta-analysis? Consult PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses). Eyes crossing from all this alphabet soup? Mine, too. No need to remember the acronyms. A freely available compendium of reporting guidelines for studies of nearly every conceivable design (there are hundreds of them) is available at www.equator-network.org [5].

CORR requires the use of these checklists for observational (retrospective) research, randomized trials (which also need to be prospectively registered if they’re going to be submitted here or to the Journal of Bone and Joint Surgery or The Bone and Joint Journal [6]), and meta-analyses or systematic reviews [5], and we recommend their use for studies of several other kinds (Table 1). Take advantage of those simple checklists and/or CORR’s online article-building tool (at www.clinorthop.org, which also includes those checklists) (Fig. 1) and you can’t go too far astray.

Table 1. Study types, names of guidelines, and whether they are required or recommended(a) when submitting research to Clinical Orthopaedics and Related Research

Type of study | Name of guideline | Required or recommended?
Animal research | ARRIVE(b) | Recommended
Diagnostic accuracy studies | STARD 2015 | Required (STARD is a close analogue to STROBE, but is more useful in the specific setting of evaluating a diagnostic test; CORR would accept either one, but STARD is preferred)
Gene expression analyses | MIAME, MINSEQE, or others | Recommended
Health economic evaluation | CHEERS | Recommended
Machine-learning and prediction models using related approaches | TRIPOD, though STARD sometimes also is useful | Recommended
Observational research (retrospective or prospective)(c) | STROBE | Required
Randomized trials | CONSORT(d) | Required
Studies based on surveys | CHERRIES or ACCADEMY | Recommended
Systematic reviews and meta-analyses | PRISMA(d) | Required
Qualitative research studies | COREQ; SRQR also is acceptable | Recommended

ARRIVE = Animal Research: Reporting of In Vivo Experiments; STARD 2015 = STandards for Reporting Diagnostic Accuracy; MIAME = Minimal Information About a Microarray Experiment; MINSEQE = MINimal information about a high throughput SEQ Experiment; CHEERS = Consolidated Health Economic Evaluation Reporting Standards; TRIPOD = Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis; STROBE = Strengthening the Reporting of Observational Studies in Epidemiology; CONSORT = Consolidated Standards of Reporting Trials; CHERRIES = CHecklist for Reporting Results of Internet E-Surveys; ACCADEMY = Academy of Critical Care: Development, Evaluation and Methodology; PRISMA = Preferred Reporting Items for Systematic Reviews and Meta-Analyses; COREQ = Consolidated Criteria for Reporting Qualitative Research; SRQR = Standards for Reporting Qualitative Research: a synthesis of recommendations.

(a) More than one guideline may apply to an individual study; in general, CORR is happy to make a determination on the best one(s) to use in consultation with authors, if questions arise.
(b) CORR versions of STROBE, CONSORT, PRISMA, and ARRIVE can be found on the CORR website on the Author Guidelines page under Methodology Checklists Followed by CORR (https://journals.lww.com/clinorthop/Pages/author-guidelines.aspx).
(c) Includes studies in which patients received routine care, and may have had interventions such as surgery, but did not receive an experimental intervention. Retrospective studies about surgical interventions, with or without historical control or comparator groups—so common in orthopaedic journals—generally would be included in this category.
(d) In addition, many guidelines have extensions for specific designs (such as CONSORT-NPT for nonpharmacological treatments, PRISMA-NMA for network meta-analysis/multiple treatment comparisons, and PRISMA-DTA for diagnostic accuracy), as well as for specific aspects of a manuscript (such as CONSORT for abstracts and PRISMA for abstracts, among others). CORR recommends consulting the EQUATOR Network (www.equator-network.org) before beginning a research study. (Modified from Leopold S, Porcher R. Editorial: What readers and clinician scientists need to know about the “other” EQUATOR. Clin Orthop Relat Res. 2021;479:643-647.)

Another way to keep readers oriented in the Methods section is to lead with the study endpoint of interest (or the question you’re answering) at the top of important paragraphs or in subheadings, and then explain the tool used to answer it.
It’s easier on a reader if these paragraphs begin something like “To determine whether three injections with pain-a-way serum reduced postoperative discomfort, we surveyed patients using …” (and then summarize the outcomes tools you used) than it is to begin with “We administered the Michigan Hand Outcomes Questionnaire to all patients …” and expect the reader to know what the MHOQ measures, or why you used it.

Finally, I’ll note that journals differ slightly about whether descriptive data like the demographics of a study’s population should go in Methods or Results. At CORR, we’ve heard from readers that by the time they get to a paper’s Results section, they just want the “answers” to the research questions, and nothing else. For that reason, we limit Results sections to the answers to a paper’s specific, testable questions, and we consider other background material—which we see as the material that authors used to answer those questions—to be Methods. I agree with our readers: This makes it easier to focus on and retain a study’s key findings.

Writing the Results

This section seems to me the most straightforward one, and its presentation is pretty consistent across research manuscripts of all designs, but it’s the one I find authors struggle with most. Since the goal of scientific reporting is to help the reader retain the paper’s main messages, a great way to do that is to keep the Results section tightly parallel to the research questions, and to think about how the reader is likely to use the material.

Doing this is simple: Write one (and only one) paragraph or subsection of Results answering each research question, and put those Results paragraphs or sections in the same order as the questions asked. Three research questions up in the Introduction means three paragraphs or subsections down here in Results, in the same sequence. The subheads should reflect the research question or the finding (like Pain) rather than the outcomes or measurement tools used (like MHOQ and VAS Scores). Begin each section of Results with a sentence that answers the question in plain language, with a minimum of jargon. If one just reads the opening sentences of each Results paragraph in sequence, one should get a good sense for the key findings of the paper.

Present the effect’s size and direction clearly, and avoid needless jargon like the names of statistical tests or the language of “significance”; if something is significantly larger than something else, then say it’s larger. If it is “larger” but not “significantly so,” just say that it is no different, or no different with the numbers available. Try “Patients treated with nobleedum spray lost less blood during surgery than those treated with placebo” rather than “Our multivariable analysis identified a significant effect associated with nobleedum spray.” Present every main finding as an effect size (commonly an odds or hazard ratio, a mean difference or difference of medians, a correlation coefficient, or a point estimate from a survivorship curve), a 95% confidence interval around that effect size estimate, and a p value or some other metric of the strength of the statistical inference. Share those effect sizes as a patient or a clinician might want you to; patients and their doctors can’t perceive p values.
Instead, report results in the context of metrics like the minimum clinically important difference (MCID), substantial clinical benefit, patient acceptable symptom state, or another measure of effect size that will help readers to know whether the treatment in question was “worth it” to the patients who are likely to be on the sharp end of the needle or knife. If there are no MCIDs, then at least discuss how large you believe the effect would need to be to justify the risk, pain, or cost. Readers are free to agree or disagree. At the end of it all, the reader wants to know if any differences you found are not just “statistically significant” but big enough to care about.

Use tables and figures to make the paper’s main messages memorable; the goal of scientific reporting is maximal clarity (at least in the body of a paper), not maximal completeness. Ask yourself what main messages you wish the reader to retain. Then, ask yourself how this would best be done: A graph? A medical illustration? A simple table that draws the reader’s attention to a key finding? If you feel that an eye-crossing table of data that goes on for pages is somehow essential, then create an appendix or online-only supplemental materials.

Writing the Discussion

The goals of this section also stay more or less consistent across papers of all designs: convince the reader to stay with you by engaging and orienting them (opening paragraph), help the reader to interpret your key findings in light of the study’s limitations, discuss the main findings of the paper in the same order they were presented in the Results, and send readers off with a few thoughtful, real-world-practical things that they should do differently based on your discoveries.

The Discussion’s opening need only be a brief paragraph; reprise the study’s background and rationale in a sentence or two each, present the key discoveries qualitatively, and summarize what you think the reader ought to do differently based on those discoveries to take better care of patients, run a more efficient practice, or make more-sensible healthcare policy.

Next, you need to discuss the limitations of the paper. The limitations section must go beyond a mere list of the study’s limitations with some vague hand-waving (“This study has limitations similar to those of all retrospective research…”). Instead, justify each limitation and explain how the reader should interpret your main findings in light of each one. For example, most retrospective studies of treatments are affected by selection bias (baseline differences between patients treated one way and those treated another way that might affect the outcomes of interest), transfer bias (follow-up insufficiently long or complete to detect all relevant harms of treatment), and assessment bias (self-interested, unvalidated, or insufficiently robust means of evaluating those treated); these often are present to greater or lesser degrees, and all tend to make a new treatment look better than it really is. It’s important to discuss these things in specific terms so that the reader doesn’t overestimate the benefits of the interventions in question. For example, if there is differential loss to follow-up between two study groups, it’s worth letting the reader know that the group with a greater percentage of patients lost to follow-up may appear to be doing better than it really is (since, in clinical research, the missing generally are not doing as well as the accounted for).
Whatever limitations may affect how a paper should be interpreted, this is the place to share them frankly, and to explain their likely effects on the study’s main findings. Modesty is an undervalued virtue, but one that is much appreciated by reviewers, editors, and readers.

A memorable Discussion body provides key context on the study’s main discoveries (the answers to the research questions) in the same order those discoveries were shared in the Results section. Keeping the sequence consistent is an aide-memoire to readers, since we learn by repetition: Asking, answering, and discussing questions (and their answers) in sequence tends to make those answers stickier. A paragraph of Discussion body per research question usually is about right. Remember, the goal here is insight and clarity, not a book-chapter level of comprehensiveness. There is room for variation in the Discussion body, as that paragraph-per-question approach doesn’t always work perfectly (for example, sometimes it’s helpful to lump the discussion of two or more questions into one paragraph, whereas other times splitting key findings is more effective), but remember that scientific reporting is not a means of self-expression for authors; it’s a tool for dissemination meant to serve readers. Less almost always is more.

Regardless of structure, some key thematic through-lines belong here: Each paragraph should start with a powerful topic sentence, ideally related to a key discovery (when you’re done, just read the first sentences of each paragraph of this section; the result should be a comprehensible big-picture summary of your paper’s take-home messages). In each paragraph, authors should make clear how the reader can convert a discovery shared in the Results section into one or more actions in service of patients, practices, or policy changes, or authors should explain how the findings should change how scientists study something next time around. There should be some compare-and-contrast with the relevant work of others; if the work is generally confirmatory, authors should explain how the paper extends what is known in meaningful ways. If the paper’s findings contradict those of prior research, help the reader know how to interpret the differences, and how to apply the new findings in practice. Finally, if there are important gaps in knowledge that remain—and there almost always are—a good Discussion body can explain how future studies should fill those gaps.

The final paragraph of a research paper should summarize the paper’s main messages, its specific suggestions, and directions for future research that leverage the discoveries it has reported. In principle, the suggestions for practice and future research made here should not have been possible to make before the study was done; if one can imagine making those suggestions in the absence of the new discoveries, the reader is left to believe that the paper did not move the needle, and the editor is left to wonder, “Why publish it?”

Writing the Abstract

Though the Abstract appears first, it’s often easier to write it last. Nothing should appear in the Abstract that doesn’t appear in the main body of the text, and it’s perfectly permissible (encouraged, in fact) to lift or adapt sentences from the main text to use in the Abstract if they’re well crafted and effective. Since the Abstract may be all that your readers have access to, you’ll want to make it compelling and digestible.
With that in mind, a parallel structure—clear questions up top and a results section that answers those questions (and only those questions) in order, with effect sizes and directions clearly conveyed—is especially helpful. And don’t forget to make good use of the Conclusion section of the Abstract. Rather than restating results, which you will have presented clearly and which your readers saw mere seconds before, assume readers read and understood your findings, and use this precious bit of real estate to answer two important questions: (1) In light of your discoveries, what, specifically, should surgeons do differently to take better care of patients? And (2) what unanswered questions remain, and how might future studies answer them in a way that builds on your discoveries?

Figures and Tables

Follow the journal’s directions.

Last Thoughts

Missing in all this is any mention of word counts. This is by design. A paper that is structured linearly according to the suggestions I’ve made here should be exactly the right length for the task. If somehow it isn’t, well, that’s what editors are for. Your work is too important to risk being misinterpreted. Presenting it clearly is the best way to help readers understand, retain, and use your discoveries in practice.
