Abstract

Scientific fraud is an increasingly vexing problem. Many current programs for fraud detection focus on image manipulation, while techniques based on anomalous patterns discoverable in the underlying numerical data receive much less attention, even though these techniques are often easy to apply. We employed three such techniques in a case study considering data sets from several hundred experiments. We compared patterns in the data sets from one research teaching specialist (RTS) with those of nine other members of the same laboratory and of three outside laboratories. We applied two conventional statistical tests, and a newly developed test for anomalous patterns in the triplicate data commonly produced in such research, to various data sets reported by the RTS. These tests repeatedly rejected, often at p-values well below 0.001, the hypothesis that the anomalous patterns in his data could have arisen by chance. This analysis emphasizes the importance of access to the raw data underlying publications, reports, and grant applications in order to evaluate the correctness of the conclusions, as well as the utility of methods for detecting anomalous, especially fabricated, numerical results.
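The abstract does not specify the tests used, but one conventional test often applied to detect fabricated numbers is a chi-square test of terminal-digit uniformity: the last digits of genuine measurements are typically close to uniformly distributed, whereas invented numbers often show digit preference. The sketch below is an illustrative assumption, not the authors' actual procedure; the function name and example data are hypothetical.

```python
from collections import Counter

def terminal_digit_chi2(values):
    """Chi-square statistic for uniformity of terminal (last) digits.

    Genuine measurement data usually have near-uniform terminal digits;
    fabricated numbers often show digit preference.  Returns the
    chi-square statistic with 9 degrees of freedom.
    """
    digits = [int(str(abs(int(v)))[-1]) for v in values]
    n = len(digits)
    expected = n / 10.0  # uniform expectation: each digit equally likely
    counts = Counter(digits)
    return sum((counts.get(d, 0) - expected) ** 2 / expected
               for d in range(10))

# Hypothetical example: 100 counts whose last digits are only 3s and 7s
biased = [13, 47, 23, 37] * 25
stat = terminal_digit_chi2(biased)
# Critical value for chi-square, 9 df, alpha = 0.05 is about 16.92
print(stat, stat > 16.92)  # → 400.0 True
```

A uniform sample (e.g. the integers 0 through 99) gives a statistic of 0, while strongly biased terminal digits, as above, far exceed the 5% critical value, mirroring the very small p-values the study reports.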
