Abstract

Field ecologists and macroecologists often compete for the same grants and academic positions, with the former producing primary data that the latter generally use for model parameterization. Primary data are usually cited only in the supplementary materials, and therefore do not count formally as citations, creating a system in which field ecologists are routinely under-acknowledged and possibly disadvantaged in the race for funding and positions. Here, we explored how the performance of authors producing novel ecological data would change if all citations to their work were accounted for by bibliometric indicators. We collected the track records of >2300 authors from Google Scholar and citation data from 600 papers published in 40 ecology journals, covering field-based, conservation, general ecology, and macroecology studies. We then parameterized a simulation that mimics the current publishing system for ecologists and assessed author rankings based on number of citations, H-Index, Impact Factor, and number of publications under a scenario in which supplementary citations count. We found weak evidence that field ecologists are ranked lower than macroecologists or general ecologists, with publication rate being the main predictor of author performance. Current ranking dynamics were largely unaffected by supplementary citations, because these are roughly ten times fewer than main-text citations. This is further exacerbated by the common practice of citing datasets assembled by previous research, or data papers, instead of the original articles. While accounting for supplementary citations does not appear to offer a solution, researcher performance evaluations should include criteria that better capture authors’ contribution of new, publicly available data. This could encourage field ecologists to collect and store new data in a systematic manner, thereby mitigating the data patchiness and bias in macroecology studies and further accelerating the advancement of ecology and related areas such as biogeography.
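As a rough illustration of the scenario tested above, the sketch below (Python) recomputes a simple h-index ranking for a few hypothetical authors once supplementary-only citations are added to main-text citations. The author labels and per-paper citation counts are invented purely to show the mechanics; this is not the study’s parameterized simulation, nor does it reproduce its empirical result.

def h_index(citations):
    # Largest h such that the author has h papers with at least h citations each.
    counts = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(counts, start=1) if c >= rank)

# Invented per-paper counts: (main-text citations, supplementary-only citations).
authors = {
    "field_ecologist":   [(10, 30), (6, 25), (4, 20), (2, 15), (1, 12)],
    "macroecologist":    [(35, 2), (20, 1), (12, 0), (8, 1), (3, 0)],
    "general_ecologist": [(18, 3), (11, 2), (7, 1), (4, 1), (2, 0)],
}

for label, count_supp in (("main text only", False), ("main + supplementary", True)):
    ranking = sorted(
        authors,
        key=lambda name: h_index(
            [m + (s if count_supp else 0) for m, s in authors[name]]
        ),
        reverse=True,
    )
    print(f"{label}: {ranking}")

In this toy setup, the author whose work is cited mainly in supplementary materials moves up the ranking once those citations count, which is precisely the kind of shift the study quantifies with real citation data.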

Highlights

  • Citations made in the journals of each category were directed mostly to journals of the same category (Fig. 2a, Table S3), except for Conservation journals, which mostly cited papers published in Multidisciplinary journals, followed by papers published in Conservation, Field Ecology, and Macroecology journals.

  • In this paper, we unveiled the dynamics of citation flows between journals covering different aspects of ecology and analyzed the extent to which these flows would be modified by supplementary citations, which normally remain undetected.

Introduction

The last century has seen an exponential growth in scientific productivity (Larsen and von Ins 2010, Bornmann and Mutz 2015), and nowadays several million papers are published every year in about 10,000 scientific journals. Hiring or funding committees can hardly evaluate the full scientific production of researchers, and it has become increasingly difficult to rank highly specialized researchers applying for a broadly described position. This results in highly subjective and hardly reproducible assessments by evaluation committees (Pier et al 2018, Forscher et al 2019), which increasingly rely on quantitative metrics for ranking researchers (Wouters 2014, Chapman et al 2019). The journal Impact Factor (IF), despite being repeatedly criticized as a measure of paper quality (Slyder et al 1989, Hicks et al 2015, McVeigh and Mann 2009, Callaway 2016, Chapman et al 2019), is still very influential in the decision-making process of university hiring committees and funding agencies (Callaway 2016, McKiernan et al 2019).
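For context, the IF referred to here is the standard two-year journal Impact Factor; this definition is general background rather than something stated in the text above. For a journal J in year Y,

\mathrm{IF}_{J,\,Y} = \frac{C_{J,\,Y}}{N_{J,\,Y-1} + N_{J,\,Y-2}}

where C_{J,Y} is the number of citations received in year Y by items that J published in years Y-1 and Y-2, and N_{J,Y-1} and N_{J,Y-2} are the numbers of citable items J published in those two years.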
