Hello Authors! We Are the Technical Reviewers and Are Here to Help You!

Mathew J. Reeves, PhD; Seana L. Gall, PhD; Ami P. Raval, PhD

Department of Epidemiology and Biostatistics, College of Human Medicine, Michigan State University, East Lansing (M.J.R.); Menzies Institute for Medical Research, University of Tasmania, Hobart, Australia (S.L.G.); Peritz Scheinberg Cerebral Vascular Disease Research Laboratory, Department of Neurology, Leonard M. Miller School of Medicine, University of Miami, FL (A.P.R.)

Originally published 29 Dec 2021. https://doi.org/10.1161/STROKEAHA.121.035647. Stroke. 2022;53:307–310.

It has been recognized for some time that science, and medical research in particular, has a serious problem with the reliability of its findings.1 The poor replicability and low scientific quality of the peer-reviewed biomedical literature have been addressed in several journals and editorials over recent years,2–5 and this problem extends even to replicating the statistical results of clinical trials6—the highest level of clinical evidence. The need for researchers across the spectrum of basic, clinical, and population-based research to be able to replicate methods is paramount to the advancement of science and the translation of findings into practice. As the last stop before research findings are widely shared, journals serve a critical role by ensuring that authors report their methods and findings in a rigorous manner that promotes replicability. The editors at Stroke and other journals in the American Heart Association (AHA) portfolio have introduced a variety of steps during manuscript preparation, submission, and review to achieve this goal, including the introduction of technical reviewers, reporting guidelines, checklists, and other detailed author instructions. In this editorial, we provide context for these decisions, which are designed to improve data transparency and openness.7

There are many causes contributing to the problem of poor reliability, some of which are directly controllable and so should be the focus of our collective attention. Avoidable problems relevant to the planning and conduct of individual research studies include poor study design, unrepresentative (ie, highly selective) study samples, uncontrolled confounding and measurement bias, insufficient sample size, and statistical errors. Once a study is completed, the next challenge concerns the write-up of the study's findings.
Problems of poor-quality research reports, in particular incomplete and inaccurate descriptions of study methods, are well recognized8 and are believed to be a major contributor to the replication problem.3,4 A full description of study methods should, at a minimum, address how members of the study sample were identified and selected, provide accurate and complete definitions of exposure and outcome variables, describe data collection methods, and explain and justify the statistical approach and methods.

Poor-quality reporting creates challenges when assessing a study's internal validity (ie, assessment of bias) and external validity (ie, generalizability).8,9 The problem of incomplete reporting comes to the fore when conducting systematic reviews and meta-analyses. Poorly reported studies are more difficult for bibliographic databases (eg, PubMed and EMBASE) to index accurately, which limits the ability of researchers and librarians to identify and retrieve relevant articles. Once an article has been identified, poor reporting can lead to uncertainties in understanding how the study was planned and conducted, creating difficulties when deciding whether to include the study or how to abstract its data. Thus, improving the quality of reporting makes a given paper more likely to be included in future systematic reviews, which extends the reach and impact of the original research.

Reporting Guidelines as a Solution to Issues of Transparency and Openness in Research

One fix for these reporting problems, which has been shown to work,10–13 is the use of reporting guidelines.14 A reporting guideline is a structured text that guides authors to report the information necessary to describe how a particular study was conducted and what it found. The structured format provides a minimum list of information needed to ensure that the manuscript can be understood by a reader, replicated by another researcher, used by a clinician, and included in relevant systematic reviews. The home of all reporting guidelines is the EQUATOR Network (Enhancing the Quality and Transparency of Health Research), an international initiative started in 2008 with the mission of improving the reliability and value of published health research by promoting transparency and accuracy through the promulgation of reporting guidelines.14 The major reporting guidelines found on EQUATOR are organized around specific study designs, for example, CONSORT (randomized controlled trials [RCTs]), STROBE (Strengthening the Reporting of Observational Studies in Epidemiology; observational designs), PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses; meta-analyses), STARD (Standards for Reporting Diagnostic Accuracy; diagnostic studies), and TRIPOD (Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis; prognostic studies), and are usually accompanied by a checklist of specific items.14 Links to the major reporting guidelines can also be found on the AHA Journal Policies home page.15

Readers should be aware that there are numerous modified versions (referred to as extensions), particularly of the CONSORT, STROBE, and PRISMA guidelines, which may be more relevant to a particular study subtype.
For example, there are currently 14 different extensions to the PRISMA guideline listed on EQUATOR that address different types of systematic reviews (eg, diagnostic tests, individual participant data, scoping, network).14

Reporting guidelines are usually developed using exacting and explicit methodology involving a team of experts familiar with the particular study design, topic, or method. The extensive effort that goes into making a guideline is best appreciated by examining the elaboration and explanation documents (see examples16–18) that are often published when a major reporting guideline is released or updated. These documents list each checklist item in the guideline and explain in detail how and why each specific item is important. The mission of EQUATOR to serve as a venue to promote and disseminate reporting guidelines has become fertile terrain for the academic entrepreneur; to wit, there are now almost 500 different reporting guidelines that can be found (via its search function) on the equator-network.org site.14 This extensive list comprises guidance documents and checklists tailored to specific uses, including research methods (eg, machine learning, literature searches, qualitative research), medical specialties (eg, surgery, cancer, gastroenterology), and applications (eg, genetics, health informatics, functional magnetic resonance imaging). Given the enormous number of guidelines available, researchers should be able to find at least one guideline that matches their study design, method, or application.

Evidence of Benefit of Reporting Guidelines and Technical Reviewers From Preclinical Studies

A particularly noteworthy example of the poor replication and translation of biomedical research is in the area of preclinical basic science research. Detailed examination of this translation gap highlighted common deficiencies in the conduct, reporting, and reproducibility of preclinical studies,19,20 which led in 2010 to the introduction of the ARRIVE guidelines (Animal Research: Reporting of In Vivo Experiments). ARRIVE includes a checklist of 20 items describing the suggested minimum information that should be provided by publications reporting on animal-based research.21 Specific to stroke research, numerous neuroprotective therapies identified as effective in animal models have failed to translate into effective clinical treatments in human populations.22,23 This translation problem led the Stroke journal in 2011 to introduce its own Basic Science Checklist as part of its manuscript submission process for preclinical studies. An evaluation of the quality of preclinical reports published in the Stroke journal between 2010 and 2013 found meaningful improvements in some, although not all, key methodological details following the introduction of this checklist.24 These ongoing deficiencies in reporting prompted the Stroke journal to require completion of an expanded preclinical studies checklist. This expanded checklist, introduced in 2016, placed added emphasis on the use of flowcharts, experimental design, inclusion and exclusion criteria, details of randomization and blinding, sample size and power calculations, data reporting, and statistical methods.25 The checklist that authors complete during the online submission process is included as part of the published supplemental material for every preclinical article.
Importantly, the Stroke journal's commitment to improving the scientific rigor of the preclinical studies it publishes also led it to create a technical reviewer position tasked with reviewing manuscripts and assessing compliance with the preclinical checklist and other reporting requirements. Similar technical reviewer positions have been established at other AHA journals, including Circulation Research and Arteriosclerosis, Thrombosis, and Vascular Biology. A recent study by Jung et al26 identified improvements in the reporting of key study design elements across several AHA journals but found greater increases in compliance in Stroke and Circulation Research—the 2 AHA journals that had introduced preclinical checklists.

In addition to supporting the use of reporting guidelines and checklists such as ARRIVE, the National Institutes of Health has sought to limit the problems of selection bias and limited generalizability that stem from using only male animals in preclinical research by recommending that investigators include animals of both sexes or, if not, provide justification for including only one sex.27 More recently, studies highlighting the adverse impact of circadian mismatch on preclinical neuroprotection studies have led to the recommendation that the time of day at which animals were exposed to surgical procedures, behavioral testing, or other treatments (pharmacological or nonpharmacological) be reported routinely.28,29

Expansion of Reporting Guidelines and Technical Reviewers for Clinical and Population-Based Studies

The evidence-based approach taken by Stroke and other AHA journals to evaluate the impact of the preclinical checklist, in conjunction with the use of a technical reviewer, provided the impetus for the expansion of this approach into other areas of the Stroke journal, namely clinical and population-based studies, which began in summer 2020. Our (small) team of technical reviewers is tasked with assessing the quality of a given submission with the assistance of a relevant reporting guideline. You will typically see our review alongside the 2 (or more) peer reviews and possibly a statistical review.30 We aim to give concise, direct feedback that is aligned with the relevant reporting guideline checklist. On occasion, we may conduct a careful audit of the items on the checklist or provide comments on the study design or statistical methods. The journal's goal for the peer review process is to publish manuscripts that are more complete, easier for the reader to understand, and easier to replicate or implement.

Our early experiences with this expanded role for technical reviewers have not been without challenges. First, requiring authors to complete a reporting guideline for every preclinical, clinical, and population-based study is not always straightforward given the wide array of topics, applications, and study designs submitted to the Stroke journal. This diversity can make it challenging to identify which reporting guideline is most applicable to a given study. While every research report is based on some underlying study design, authors frequently do not identify the design on which the report relies. To aid this process, the Stroke journal will now require authors to identify and select the most appropriate study design for their manuscript, which in turn will assist in identifying the most appropriate reporting guideline and checklist to be used.
Ultimately, the authors are in the best position to select the best reporting guideline, especially because the guideline should strongly influence the content and formatting of the paper during its preparation. Another issue we have faced is that, for some submissions, 2 different guidelines may be equally applicable, for example, one covering the study design (eg, STROBE for an observational cohort study) and one covering the technical methodology (eg, MI-CLAIM [Minimum Information About Clinical Artificial Intelligence Modeling] for machine learning). In such cases, authors are encouraged to submit whatever reporting guidelines they used when preparing the manuscript, which in some cases may include more than one checklist.

To date, the Stroke journal has requested that authors complete a reporting guideline checklist only at the time of the first resubmission (ie, after the initial review). While authors have been almost universally compliant with this request, it has been rare to see an author actually change the manuscript in light of the items listed on the checklist. Obviously, the checklist should be more than an exercise in checking boxes: the specific items should be reflected in the paper itself, either in the main text or in the supplemental material. Our early experience is that most initial submissions do not cover all the items on a checklist (which typically includes 20–25 items); in other words, almost all papers require some editing to conform better to the recommendations of the guideline and checklist. To fix this problem, the Stroke journal will be testing the approach of having checklists submitted at the time of first submission. This approach should improve the quality of initial submissions and make the job of peer reviewers and editors easier because the information required to make an initial determination of a paper's scientific merit is less likely to be missing.

Finally, there is the problem of word limits and how one should accommodate the additional information required by a checklist. We believe the best solution is not to regard this as additional information but as information that forms the core foundation of the article itself. This is another advantage of using a checklist during the manuscript preparation stage: these data are built in at the start rather than bolted on at the end. Of course, there may be items for which there is simply not room to provide sufficient detail in the main paper. Fortunately, we have a solution for that: the journal encourages authors to include these details in the supplemental material, ensuring that it is properly cited in the main text to assist the reader in finding the key information. Additional methodological details and access to underlying research data and analytical code can also be provided via numerous online data repositories (eg, dbGaP, GenBank, NCBI Protein)31 and archival sites (eg, Open Science Framework, Dryad, GitHub).

Conclusions

Reporting guidelines are one small step along the road to continually improving the quality of a journal, but reporting guidelines and checklists cannot do it all. They are merely guidance documents that assist but do not guarantee that authors write complete and accurate reports.
Checklists are just one piece of the puzzle; authors also need to pay attention to study preregistration, publication of protocols, and provision of access to source data and analytical code (as summarized in the AHA Transparency and Openness Promotion guidelines),7 as well as, if relevant, adherence to recommendations specific to disparity-based research.32,33 These tools, combined with high-quality peer review and editorial decision-making, help ensure the integrity and usefulness of the published content of the Stroke journal. All authors, peer reviewers, and editors have a responsibility to the mission of making the Stroke journal a trusted source of cutting-edge and clinically applicable research. With that in mind, we encourage authors to provide feedback to help us and the journal improve the implementation of reporting guidelines and checklists. We are, after all, here to help.

Article Information

Disclosures

Dr Raval reports research funding from the US Department of Veterans Affairs and the Florida Department of Health. Dr Gall reports grant funding from the Minderoo Foundation and support from MTP connect, the National Health and Medical Research Council, the National Heart Foundation, and the National Stroke Foundation of Australia. Dr Reeves reports no conflicts.

Footnotes

The opinions expressed in this article are not necessarily those of the editors or of the American Heart Association.

Correspondence to: Mathew J. Reeves, PhD, Department of Epidemiology and Biostatistics, Michigan State University, B601 W Fee Hall, East Lansing, MI 48824. Email [email protected]
