Abstract

Bibliometric data are relatively simple and describe objective processes of publishing articles and citing others. It seems quite straightforward to define reasonable measures of a researcher's productivity, research quality, or overall performance based on these data. Why, then, do we still have no acceptable bibliometric measures of scientific performance? Instead, there are hundreds of indicators, with nobody knowing how to use them. At the same time, an increasing number of researchers and some entire research fields have been excluded from standard bibliometric analysis to avoid manifestly contradictory conclusions. I argue that the biggest current problem is the inadequate rule of credit allocation for multi-authored articles in mainstream bibliometrics. Clinging to this historical choice precludes any systematic and logically consistent bibliometrics-based evaluation of researchers, research groups, and institutions. Over the last 50 years, several authors have called for a change. Apparently, there are no serious methodologically justified or evidence-based arguments in favor of the present system. However, there are intractable social, psychological, and economic issues that make adoption of a logically sound counting system almost impossible.

Highlights

  • During the past few decades, the quantitative measurement of scientific performance has started to play an important role

  • One may conclude that measurement of scientific performance is very complex and necessarily subjective

  • I argue that the current biggest problem is the inadequate rule of credit allocation for multiple authored articles in mainstream bibliometrics

INTRODUCTION

During the past few decades, the quantitative measurement of scientific performance has started to play an important role. The h-index (Hirsch, 2005) is probably the most popular bibliometric indicator that has been advertised as a measure of an individual scientist's output. This index uses a clever combination of publication and citation counts that discounts a few accidentally high citation results and makes the indicator more robust than simple total citations. Clarivate publishes a list of highly cited researchers, the well-known ranking of individual scientists based on traditional whole-count bibliometrics. This list includes the authors of articles that rank in the top 1% by citations for their research field and year of publication. This simple division applies to all disciplines and supports interdisciplinary comparison as well.
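The combination of publication and citation counts behind the h-index can be sketched in a few lines: a researcher has index h if h of their papers have each been cited at least h times. Below is a minimal illustration, with made-up citation counts (the numbers are hypothetical, not drawn from any real researcher).

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each.

    Sorting citation counts in descending order lets us scan ranks:
    the h-index is the last rank whose citation count still meets it.
    """
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for one researcher's five papers:
print(h_index([10, 8, 5, 4, 3]))  # prints 4: four papers have >= 4 citations
```

Note how the single outlier paper with 10 citations barely moves the result; this is the robustness to accidental highly cited papers mentioned above.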

A REVISIONIST FROM ITALY
CONCLUSIONS