Abstract

Background: From 2005 to 2010, we conducted 2 randomized studies in a journal (Medicina Clínica), where we took manuscripts received for publication and randomly assigned them to either the standard editorial process or to additional processes. Both studies were based on the use of methodological reviewers and reporting guidelines (RG). Those interventions slightly improved the items reported on the Manuscript Quality Assessment Instrument (MQAI), which assesses the quality of the research report. However, masked evaluators were able to guess the allocated group in 62% (56/90) of the papers, thus presenting a risk of detection bias. In this post-hoc study, we analyse whether those interventions, originally designed to improve the completeness of manuscript reporting, may have had an effect on the number of citations, which is the outcome measure used here.

Methods: Masked to the intervention group, one of us used the Web of Science (WoS) to quantify the number of citations that the participating manuscripts received up to December 2016. We calculated the ratio of mean citations between intervention arms and then quantified its uncertainty by means of the jackknife method, which avoids assumptions about the shape of the distribution.

Results: Our study included 191 articles (99 and 92, respectively) from the two previous studies, which altogether received 1336 citations. In both studies, the groups subjected to additional processes showed higher means, standard deviations and annual rates. The intervention effect was similar in both studies, with a combined estimate of a 43% (95% CI: 3 to 98%) increase in the number of citations.

Conclusions: We interpret that those effects are driven mainly by the introduction into the editorial process of a senior methodologist who looks for missing RG items. Those results are promising but not definitive, given the exploratory nature of the study and some important caveats: the limitations of using the number of citations as a measure of scientific impact, and the fact that our study is based on a single journal. We invite journals to perform their own studies to ascertain whether or not scientific repercussion is increased by adhering to reporting guidelines and further involving statisticians in the editorial process.
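The Methods describe estimating the ratio of mean citations between intervention arms and quantifying its uncertainty with the jackknife, which avoids distributional assumptions. As an illustration only, the sketch below shows one standard way such a jackknife interval can be computed (the function name, the leave-one-out scheme over the pooled samples, and the choice to work on the log scale before back-transforming are our assumptions, not the paper's exact procedure):

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def jackknife_ratio_ci(treated, control, z=1.96):
    """Jackknife confidence interval for the ratio of mean citations
    (treated / control), computed on the log scale and back-transformed.
    Illustrative sketch only; assumes both group means are positive."""
    n_t, n_c = len(treated), len(control)
    n = n_t + n_c
    theta_hat = math.log(mean(treated) / mean(control))

    # Leave-one-out estimates over the pooled observations.
    loo = []
    for i in range(n_t):   # drop one treated paper at a time
        loo.append(math.log(mean(treated[:i] + treated[i + 1:]) / mean(control)))
    for j in range(n_c):   # drop one control paper at a time
        loo.append(math.log(mean(treated) / mean(control[:j] + control[j + 1:])))

    # Jackknife pseudo-values, point estimate, and standard error.
    pseudo = [n * theta_hat - (n - 1) * t for t in loo]
    est = mean(pseudo)
    se = math.sqrt(sum((p - est) ** 2 for p in pseudo) / (n * (n - 1)))

    # Back-transform from the log scale to a ratio.
    return math.exp(est), math.exp(est - z * se), math.exp(est + z * se)
```

Working on the log scale keeps the interval's lower bound positive, which is natural for a ratio of citation means; other variants (e.g. jackknifing the ratio directly) are equally defensible.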

Highlights

  • From 2005 to 2010, we conducted 2 randomized studies on a journal (Medicina Clínica), where we took manuscripts received for publication and randomly assigned them to either the standard editorial process or to additional processes

  • If we consider both studies together, we find that including a methodological reviewer who is dedicated to looking for missing reporting guidelines (RG) items increases the number of citations by 43% (95% confidence interval (CI): 3 to 98%), a result that — if this finding is sustained — might justify the cost and time expenditure by the journal [26, 27]

  • The number of papers with zero-citations was higher in the intervention groups of both studies, which raises the possibility that greater transparency deters citations for some kinds of papers


Introduction

From 2005 to 2010, we conducted 2 randomized studies in a journal (Medicina Clínica), where we took manuscripts received for publication and randomly assigned them to either the standard editorial process or to additional processes. Masked evaluators were able to guess the allocated group in 62% (56/90) of the papers, presenting a risk of detection bias. In this post-hoc study, we analyse whether those interventions, originally designed to improve the completeness of manuscript reporting, may have had an effect on the number of citations, which is the outcome measure used here. According to the systematic review published by Bruce et al. (2016) [8], which we expand on in Section 7 of the Additional file 1, at least 23 randomized trials have studied some aspect of the peer review process, with the majority of them focusing on the quality of peer review as a surrogate outcome, while only 3 [15,16,17] analysed the completeness of reporting as an outcome. Of these 3 trials, which we previously conducted, only 2 [15, 16] found positive results regarding completeness of reporting, and only one of these reached statistical significance. We previously explored this relationship with a shorter follow-up (SM, Sections 5 and 6).

