Abstract
Bibliometric indicators such as journal impact factors, h-indices, and total citation counts are algorithmic artifacts that can be used in research evaluation and management. These artifacts have no meaning by themselves, but receive their meaning from attributions in institutional practices. We distinguish four main stakeholders in these practices: (1) producers of bibliometric data and indicators; (2) bibliometricians who develop and test indicators; (3) research managers who apply the indicators; and (4) the scientists being evaluated with potentially competing career interests. These different positions may lead to different and sometimes conflicting perspectives on the meaning and value of the indicators. The indicators can thus be considered as boundary objects which are socially constructed in translations among these perspectives. This paper proposes an analytical clarification by listing an informed set of (sometimes unsolved) problems in bibliometrics which can also shed light on the tension between simple but invalid indicators that are widely used (e.g., the h-index) and more sophisticated indicators that are not used or cannot be used in evaluation practices because they are not transparent for users, cannot be calculated, or are difficult to interpret.
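To make the "simple" side of this tension concrete: the h-index can be computed from nothing more than a list of citation counts. A minimal sketch in Python (the function name and example values are illustrative additions, not taken from the paper):

```python
def h_index(citations):
    """Return the h-index: the largest h such that the author has
    at least h papers with h or more citations each."""
    # Sort citation counts in descending order, then find the last
    # rank at which the citation count still meets or exceeds the rank.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Example: five papers cited 10, 8, 5, 4, and 3 times yield h = 4.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

The computation is transparent to any user with a publication list, which helps explain the indicator's wide uptake despite the validity problems the paper discusses.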
Highlights
In Toward a Metric of Science: The Advent of Science Indicators (Elkana et al. 1978), the new field of science indicators and scientometrics was welcomed by a number of authors from the history and philosophy of science and the sociology of science.
We argue that the ambivalences around the use of bibliometric indicators are not accidental but inherent to evaluation practices (Rushforth and de Rijcke 2015).
The ambivalences around the use of data and indicators described in the sections above concern these stakeholders to varying extents.
Summary
In Toward a Metric of Science: The Advent of Science Indicators (Elkana et al. 1978), the new field of science indicators and scientometrics was welcomed by a number of authors from the history and philosophy of science and the sociology of science. We distinguish four main stakeholders in these practices: (1) producers of bibliometric data and indicators; (2) bibliometricians who develop and test indicators; (3) research managers who apply the indicators; and (4) the scientists being evaluated with potentially competing career interests. While normalized indicators were initially used only by professional bibliometricians (group 2), they have recently been recommended for more general use (Hicks et al. 2015) and have increasingly become the standard in evaluation practices (Waltman 2016).
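For contrast with the h-index, the normalized indicators mentioned here require a reference standard. A minimal sketch of one widely used variant, the mean normalized citation score, assuming field- and year-specific expected citation rates are available from a bibliometric database (an assumption for illustration, not a detail given in this summary):

```python
def mean_normalized_citation_score(citations, expected):
    """Mean normalized citation score (MNCS): the average, over a unit's
    papers, of actual citations divided by the expected citations for
    papers of the same field and publication year.

    `citations` and `expected` are parallel lists; the expected values
    are assumed to come from a reference database (an assumption here,
    not something the summary specifies).
    """
    ratios = [c / e for c, e in zip(citations, expected)]
    return sum(ratios) / len(ratios)

# Illustrative values only: two papers cited at the field average
# (ratio 1.0) and one cited at twice the field average (ratio 2.0)
# give an MNCS of about 1.33, i.e., above world average.
print(mean_normalized_citation_score([12, 6, 20], [12, 6, 10]))
```

Note that the expected values cannot be derived from a researcher's own publication list; they depend on access to a citation database. This dependence illustrates the paper's point that more sophisticated indicators are often not transparent for users and cannot be calculated by them directly.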