Abstract

Background

As a result of reporting bias or fraud, false or misunderstood findings may represent the majority of published research claims. This article provides simple methods that might help to appraise the quality of the reporting of randomized controlled trials (RCTs).

Methods

The evaluation roadmap proposed herein relies on four steps: evaluation of the distribution of the reported variables; evaluation of the distribution of the reported p values; data simulation using parametric bootstrap; and explicit computation of the p values. This approach is illustrated using published data from a retracted RCT comparing a hydroxyethyl starch versus albumin-based priming for cardiopulmonary bypass.

Results

Despite obviously nonnormal distributions, several variables are presented as if they were normally distributed. The set of 16 p values testing for differences in baseline characteristics across randomized groups did not follow a Uniform distribution on [0,1] (p = 0.045). The p values obtained by explicit computation differed from the results reported by the authors for the two following variables: urine output at 5 hours (calculated p < 10⁻⁶, reported p ≥ 0.05) and packed red blood cells (PRBC) transfused during surgery (calculated p = 0.08, reported p < 0.05). Finally, for urine output 5 hours after surgery, the parametric bootstrap found p > 0.05 in only 5 of the 10,000 simulated datasets. For PRBC transfused during surgery, the parametric bootstrap showed that the corresponding p value had less than a 50% chance of falling below 0.05 (3,920/10,000 simulated datasets with p < 0.05).

Conclusions

Such simple evaluation methods might offer warning signals. However, it should be emphasized that these methods do not allow one to conclude that error or fraud is present; rather, they should be used to justify requesting access to the raw data.
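The second step of the roadmap — checking whether baseline-comparison p values are compatible with a Uniform[0,1] distribution, as expected under proper randomization — can be sketched as below. The p values used here are illustrative placeholders, not the trial's reported values, and the one-sample Kolmogorov–Smirnov test is one common choice for this check, not necessarily the exact test used by the authors.

```python
# Test whether a set of 16 baseline-comparison p values is compatible
# with a Uniform[0, 1] distribution. Under correct randomization and
# honest reporting, such p values should be approximately uniform.
from scipy.stats import kstest

# Hypothetical p values from 16 baseline comparisons (placeholders).
# A cluster near 1, as here, suggests groups that are "too similar".
p_values = [0.91, 0.87, 0.95, 0.72, 0.88, 0.93, 0.69, 0.81,
            0.76, 0.97, 0.84, 0.90, 0.66, 0.79, 0.92, 0.85]

# One-sample Kolmogorov-Smirnov test against Uniform[0, 1].
stat, p = kstest(p_values, "uniform")
print(f"KS statistic = {stat:.3f}, p = {p:.6f}")
```

With values clustered near 1 like these, the test rejects uniformity decisively; a warning signal of this kind would motivate requesting the raw data rather than a conclusion of fraud.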

Highlights

  • As a result of reporting bias or fraud, false or misunderstood findings may represent the majority of published research claims

  • The standard deviations for the volume of packed red blood cells (PRBC) and fresh frozen plasma (FFP) are far too large compared with their mean values (Additional file 2: Table S1b)

  • We proposed a critical appraisal of the results of randomized controlled trials based on a multistep procedure (Figure 1): evaluation of the distribution of the reported variables; evaluation of the distribution of the p values reported for the comparison of the baseline characteristics of the two groups; explicit computation of the p values; and parametric bootstrap
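The last two steps of the procedure — explicit recomputation of a p value from the reported summary statistics, and a parametric bootstrap that counts how often a significant result would arise under those summaries — can be sketched as follows. All numbers (means, SDs, group sizes) are illustrative placeholders, not the trial's actual data, and a Welch two-sample t-test is assumed as the comparison test.

```python
# Recompute a two-group p value from reported summaries, then use a
# parametric bootstrap to see how often the reported significance
# would be reproduced under normal sampling with those moments.
import numpy as np
from scipy.stats import ttest_ind, ttest_ind_from_stats

rng = np.random.default_rng(0)

# Hypothetical reported summaries: mean, SD, n per group.
m1, s1, n1 = 520.0, 180.0, 30   # e.g. group A urine output (mL)
m2, s2, n2 = 610.0, 200.0, 30   # e.g. group B urine output (mL)

# Step 3: explicit recomputation of the Welch t-test p value
# directly from the published means, SDs, and sample sizes.
_, p_explicit = ttest_ind_from_stats(m1, s1, n1, m2, s2, n2,
                                     equal_var=False)

# Step 4: parametric bootstrap -- simulate datasets with the reported
# moments and record how often the test comes out significant.
n_sim = 10_000
signif = 0
for _ in range(n_sim):
    a = rng.normal(m1, s1, n1)
    b = rng.normal(m2, s2, n2)
    _, p = ttest_ind(a, b, equal_var=False)
    signif += p < 0.05
print(f"explicit p = {p_explicit:.3f}, "
      f"significant in {signif}/{n_sim} simulations")
```

If the recomputed p value disagrees with the published one, or if a reported significant difference is reproduced in well under half of the simulated datasets, that constitutes a warning signal of the kind described in the abstract.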



Introduction

As a result of reporting bias or fraud, false or misunderstood findings may represent the majority of published research claims. Major scientific journals should ask researchers to provide their raw data to allow an external verification of the results [8]. While such policies are lacking, it is currently difficult to verify the accuracy of published data. It is the responsibility of editors, reviewers, and readers to appraise research reports before translating the results into clinical practice. Our objective was to test whether such simple tools, applied to a manuscript known to be fraudulent [10,11], would have helped to detect warning signals of poor quality. Such warning signals would have justified asking the trialists for more detailed information concerning the raw data.


