No published method for research integrity review combines statistical techniques applied to groups of randomized trials with individual assessment of papers. Based on practical experience, we propose a method for investigating data integrity across the collected papers of an author or author group suspected of academic misconduct. We recommend a systematic search for the involved authors' work in PubMed, Google Scholar, and the Retraction Watch database, together with a search of trial registries for unpublished clinical trials. Summary information from the studies should be tabulated to assess consistency between study registration, execution, and publication. Each paper should be examined for unfeasible features of the study's governance, methodology, execution, results, and reporting. Pairwise comparison of baseline and outcome tables between papers may reveal data duplication or unfeasibly large differences in baseline characteristics between similar studies. Assessment of baseline characteristics from multiple randomized trials using Carlisle's method can determine whether the data are consistent with a properly executed randomization process, as can checking whether reported baseline characteristics follow expected patterns for random variables, such as Benford's law. If serious concerns are raised, a more thorough investigation should be performed by journals, publishers, and institutions. These methods provide a systematic and reproducible way to assess the collected work of an author or group of authors.

It is increasingly accepted that papers reporting on clinical studies may contain fraudulent or falsified data, often across multiple papers by a single author or author group.
Based on our experience assessing the research integrity of collections of papers by one author or author group, we present an approach to these investigations that combines published statistical methods with pragmatic assessment of study feasibility. This will help journals and publishers better identify groups of potentially untrustworthy studies.
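The baseline-characteristics check in the spirit of Carlisle's method can be sketched in a few lines. The sketch below is a minimal illustration only, not Carlisle's full published procedure (which uses Monte Carlo simulation and accounts for rounding of summary statistics); the baseline rows are hypothetical. The idea is that p-values from between-group comparisons of baseline variables in genuinely randomized trials should be roughly uniform on (0, 1), so strong clustering near 0 or near 1 warrants further scrutiny.

```python
# Simplified, illustrative Carlisle-style baseline check.
# The data below are hypothetical; a real check would extract
# (mean, SD, n) per group for each baseline variable from the papers.
from scipy import stats

baseline_rows = [
    # (mean1, sd1, n1, mean2, sd2, n2) per baseline variable, e.g.:
    (54.2, 8.1, 60, 54.0, 7.9, 60),     # age (years)
    (78.5, 12.3, 60, 78.8, 11.9, 60),   # weight (kg)
    (121.0, 14.2, 60, 120.5, 13.8, 60), # systolic BP (mmHg)
]

p_values = []
for m1, s1, n1, m2, s2, n2 in baseline_rows:
    # Two-sample t-test reconstructed from published summary statistics.
    _, p = stats.ttest_ind_from_stats(m1, s1, n1, m2, s2, n2)
    p_values.append(p)

# Under proper randomization the p-values should be approximately
# uniform; a Kolmogorov-Smirnov test gives a crude overall check.
ks_stat, ks_p = stats.kstest(p_values, "uniform")
print(p_values, ks_p)
```

A very small ks_p (p-values clustered, e.g. groups implausibly similar across many variables) would flag the collection of trials for closer manual review rather than prove misconduct.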