Abstract

The paper examines the extent of bias in the performance rankings of research organisations when the assessments are based on unsupervised author-name disambiguation algorithms. It compares the outcomes of a research performance evaluation exercise of Italian universities obtained with the unsupervised approach of Caron and van Eck (2014) for deriving the universities' research staff against a benchmark based on the supervised algorithm of D'Angelo, Giuffrida, and Abramo (2011), which draws on input data. The methodology developed here could be replicated for comparative analyses in other settings of national or international interest, giving practitioners a precise measure of the distortions inherent in evaluation exercises that rely on unsupervised algorithms. This could in turn inform policy-makers' decisions on whether to invest in building national research staff databases, rather than settling for unsupervised approaches with their attendant measurement biases.
