Abstract
In the evaluation of research, the same unequal structure that characterizes the production of research is reproduced. Just as a few very productive researchers account for most papers and citations received, only a few researchers are heavily involved in the research evaluation process (as editorial board members of journals or as reviewers). To produce a high number of papers, receive many citations, and take part in the evaluation of research papers, one needs to belong to the minority of "giants" who enjoy high productivity and greater scientific success. Among editorial board members and reviewers, we often find this same minority of giants. In this paper, we apply an economic approach to interpret recent trends in research evaluation and derive a new interpretation of Altmetrics as a response to the need for the democratization of research and its evaluation. In this context, the majority of "pygmies" can participate in evaluation through Altmetrics, whose use is more democratic, that is, much wider and open to all.
Daraio, Journal of Altmetrics

Highlights
We live in a society of evaluation (Dahler-Larsen 2011; Gläser & Whitley 2007)
We believe the skewness of bibliometric indicators highlights the inequality among scholars and institutions
This paper offers an interpretation of Altmetrics within existing current trends in the evaluation of research
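The highlighted claim that skewed bibliometric distributions reflect inequality among scholars can be made concrete with a standard inequality measure. The sketch below, using entirely hypothetical citation counts (not data from this paper), computes a Gini coefficient, where 0 means perfectly equal and values approaching 1 mean a few "giants" hold nearly everything:

```python
def gini(values):
    """Gini coefficient of non-negative values (0 = perfect equality)."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Sorted-rank identity: G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

# Hypothetical citation counts: a few giants, many pygmies.
citations = [250, 120, 40, 10, 5, 3, 2, 1, 1, 0]
print(round(gini(citations), 2))  # prints 0.76
```

A heavily skewed distribution like this yields a high coefficient, while a uniform one (e.g. every researcher cited five times) yields 0, which is one way to quantify the giants-versus-pygmies contrast the paper draws.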
Summary
We live in a society of evaluation (Dahler-Larsen 2011; Gläser & Whitley 2007). Assessment is complicated by the quantification of data and by data processing for use in different contexts and for different purposes (Carson 2020; Daraio & Glänzel 2016), including process monitoring, input-output monitoring, and ex-ante and ex-post evaluation. This creates a need to specify standards and rules for metadata definition and quantification (for additional details and references, see Daraio & Glänzel 2016).