Abstract
Once in 2001 and annually since 2014, Clarivate (and formerly Thomson Reuters) has used publication and citation data to identify exceptional researchers, so-called highly cited researchers (HCRs), in nearly all disciplines. The approach used by Clarivate has not been without criticism. HCRs can be defined in different ways; Clarivate's approach is one possibility among several. HCRs can be identified by considering field-normalized citation rates or absolute numbers of citations; inclusion or exclusion of self-citations; full or fractional counting of publications; all authors, only corresponding authors, or only first authors; short, long, or varying citation windows; and short or long publication periods. In this study, we are interested in the effect these different approaches have on the empirical outcomes. One might expect HCR lists with large overlaps of authors, since all approaches are based on the same (bibliometric) data. As we demonstrate with five different variants of defining HCRs, however, the choice among these options has a significant influence on the sample of researchers that is thereby defined as highly cited and on their characteristics. Some options have a stronger influence on the outcome than other options, such as the length of the citation window or the focus on all authors versus only the corresponding author. Based on the empirical results of this study, we recommend that users of HCR lists always be aware of the influence these options have on the final lists of researchers.
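The following minimal Python sketch is not part of the study; the toy publication records, function names, and top-n cutoff are illustrative assumptions only. It shows how just two of the options listed above, full versus fractional counting of publications and inclusion versus exclusion of self-citations, can already change which authors come out as "highly cited" on the same underlying data.

```python
from collections import defaultdict

# Hypothetical records: each publication lists its authors, its total
# citations, and how many of those citations are self-citations.
publications = [
    {"authors": ["A", "B"], "citations": 100, "self_citations": 20},
    {"authors": ["A"],      "citations": 40,  "self_citations": 0},
    {"authors": ["B", "C"], "citations": 60,  "self_citations": 30},
]

def citation_scores(pubs, fractional=False, exclude_self=False):
    """Aggregate citations per author under the chosen counting options."""
    scores = defaultdict(float)
    for pub in pubs:
        cites = pub["citations"]
        if exclude_self:
            cites -= pub["self_citations"]
        # Fractional counting divides credit among all co-authors;
        # full counting gives every co-author the full citation count.
        share = cites / len(pub["authors"]) if fractional else cites
        for author in pub["authors"]:
            scores[author] += share
    return dict(scores)

def top_cited(scores, top_n=2):
    """Return the top_n authors, i.e. one possible HCR-style cutoff."""
    return [a for a, _ in sorted(scores.items(), key=lambda x: -x[1])[:top_n]]

# The same toy data can yield different "highly cited" sets per variant.
for fractional in (False, True):
    for exclude_self in (False, True):
        scores = citation_scores(publications, fractional, exclude_self)
        print(f"fractional={fractional}, exclude_self={exclude_self}: "
              f"{top_cited(scores)}")
```

This sketch covers only two of the options discussed in the abstract; field normalization, author-role restrictions, and citation-window choices would add further axes of variation to the resulting lists.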