Abstract

We surveyed 807 researchers (494 ecologists and 313 evolutionary biologists) about their use of Questionable Research Practices (QRPs), including cherry-picking statistically significant results, p-hacking, and hypothesising after the results are known (HARKing). We also asked them to estimate the proportion of their colleagues who use each of these QRPs. Several of the QRPs were prevalent within the ecology and evolution research community. Across the two groups, we found that 64% of surveyed researchers reported they had at least once failed to report results because they were not statistically significant (cherry-picking); 42% had collected more data after inspecting whether results were statistically significant (a form of p-hacking); and 51% had reported an unexpected finding as though it had been hypothesised from the start (HARKing). Such practices have been directly implicated in the low rates of reproducible results uncovered by recent large-scale replication studies in psychology and other disciplines. The rates of QRPs found in this study are comparable with those seen in psychology, indicating that the reproducibility problems discovered in psychology are also likely to be present in ecology and evolution.

Highlights

  • The responses from ecology and evolution researchers were broadly similar to those from the samples of psychologists studied by John et al. [17] and Agnoli et al. [16] (Table 2)

  • One exception to this is that ecologists were less likely than psychologists or evolution researchers to report ‘collecting more data after inspecting whether the results are statistically significant’

Introduction

All forms of science communication, including traditional journal articles, involve transforming complicated, often messy data into a coherent narrative form. Growing evidence of low reproducibility in psychology and other disciplines (e.g., [2,3]) has triggered reflection and meta-research about the ways in which this transformation process is susceptible to confusion and corruption. Large-scale meta-research and replication projects have not yet been conducted in ecology and evolution [4,5]. However, many of the drivers of low reproducibility in other fields, such as publication bias and inflated type I error rates, appear common in ecology and evolution [6,7,8,9]. For example, Jennions and Møller [10] found that 38% of meta-analyses appeared to suffer from publication bias, and that adjusting for missing (file-drawer) studies changed the statistical conclusion (from statistically significant to non-significant) in 21% of cases. Low statistical power is also a long-standing problem in ecology.
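
To make concrete why "collecting more data after inspecting whether results are statistically significant" inflates type I error, the sketch below simulates that practice under a true null hypothesis. It is a minimal illustration, not an analysis from this study or its survey: the sample sizes (start at 20 per group, add 20 at a time, stop at 100), the two-sample t-test, and the 5% threshold are all assumed choices.

```python
# Hypothetical simulation (not from the surveyed paper): how optional
# stopping -- "collect more data after inspecting significance" --
# inflates the type I error rate above the nominal alpha.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
ALPHA, N_START, N_STEP, N_MAX, N_SIMS = 0.05, 20, 20, 100, 5000

def one_study() -> bool:
    """Run one simulated study; return True if it ends in a false positive."""
    # Both groups are drawn from the SAME distribution, so the null
    # hypothesis is true and every "significant" result is a false positive.
    a = list(rng.normal(size=N_START))
    b = list(rng.normal(size=N_START))
    while True:
        p = stats.ttest_ind(a, b).pvalue
        if p < ALPHA:
            return True                      # stop and "publish"
        if len(a) >= N_MAX:
            return False                     # give up: correct non-rejection
        a.extend(rng.normal(size=N_STEP))    # collect more data...
        b.extend(rng.normal(size=N_STEP))    # ...and test again

false_positive_rate = sum(one_study() for _ in range(N_SIMS)) / N_SIMS
print(f"Realised type I error with optional stopping: {false_positive_rate:.3f}")
# With these settings (five looks per study) the realised rate is typically
# around 0.13-0.14, well above the nominal 0.05.
```

Because the simulated researcher stops early only when a test crosses the threshold, each extra look gives the true null another chance to be rejected, so false positives accumulate across looks; disclosed sequential designs avoid this only by correcting the significance threshold for the number of planned analyses.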
