Abstract

The use of bibliometric measures in the evaluation of research has increased considerably, drawing on expertise from the growing research field of evaluative citation analysis (ECA). However, mounting criticism of such metrics suggests that the professionalization of bibliometric expertise remains contested. This paper investigates why impact metrics, such as the journal impact factor and the h-index, proliferate even though their legitimacy as a means of professional research assessment is questioned. Our analysis is informed by two relevant sociological theories: Andrew Abbott’s theory of professions and Richard Whitley’s theory of scientific work. We connect these complementary concepts to demonstrate that ECA has so far failed to provide scientific authority for professional research assessment. This argument is based on an empirical investigation of the extent of reputational control in the relevant research area. Using three measures of reputational control computed from longitudinal inter-organizational networks in ECA (1972–2016), we show that peripheral and isolated actors contribute the same number of novel bibliometric indicators as central actors. In addition, the share of newcomers to the academic sector has remained high. These findings demonstrate that recent methodological debates in ECA have not been accompanied by the formation of an intellectual field in the sociological sense of a reputational organization. We therefore conclude that a growing gap exists between an academic sector with little capacity for collective action and the increasing demand for routine performance assessment by research organizations and funding agencies. This gap has been filled by database providers. By selecting and distributing research metrics, these commercial providers have gained a powerful role in defining de facto standards of research excellence without being challenged by expert authority.

Highlights

  • In recent years, the use of citation impact metrics has increased considerably

  • This paper presents a method to empirically investigate the extent of reputational control in intellectual fields

  • This method consists of defining a set of comparable inventions within a circumscribed research area and determining the origin of these inventions within a scientific collaboration network
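The method in the highlights above pairs two ingredients: a set of comparable inventions (novel bibliometric indicators) and the position of each inventor within a collaboration network. A minimal sketch of that logic is shown below, using made-up organization names, a toy edge list, and a simple degree-based cutoff as the centrality proxy; the paper's actual three measures of reputational control are not reproduced here.

```python
# Hypothetical inter-organizational co-authorship ties in ECA.
# Organization names and edges are illustrative, not the paper's data.
edges = [
    ("Univ A", "Univ B"), ("Univ A", "Univ C"),
    ("Univ B", "Univ C"), ("Univ A", "Univ D"),
    ("Univ E", "Univ F"),
]

# Each novel bibliometric indicator mapped to its originating organization.
inventions = {
    "indicator_1": "Univ A",
    "indicator_2": "Univ E",
    "indicator_3": "Univ G",  # no collaboration ties at all
}

def degree(org, edges):
    """Count the collaboration ties of an organization."""
    return sum(org in e for e in edges)

def classify(org, edges, threshold=2):
    """Crude reputational-control proxy: an inventor is 'central' with at
    least `threshold` ties, 'isolated' with none, else 'peripheral'.
    The threshold is an arbitrary illustration, not the paper's criterion."""
    d = degree(org, edges)
    if d == 0:
        return "isolated"
    return "central" if d >= threshold else "peripheral"

# Determine where each invention originates within the network.
origins = {name: classify(org, edges) for name, org in inventions.items()}
print(origins)
```

Under strong reputational control, inventions would cluster among "central" actors; the paper's finding is that peripheral and isolated actors contribute just as many.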


Introduction

The use of citation impact metrics has increased considerably. Some of these metrics are new, whereas others are variants or refinements of existing methods for measuring the scientific impact of published research [1, 2]. The comparative literature on the governance of higher education shows that citation-based metrics have rarely gained a prominent place at the macro-level of national research funding systems [9,10,11]. Rather, their use as performance indicators can be understood as part of a broader set of organizational controlling techniques adopted in response to changing expectations from their environment [12, 13]. The literature stresses increasing demands for accountability in the national governance of public research and higher education, including international rankings and league tables that promote political discourse on global competition among research organizations for scientific prestige and talent [14,15,16]. These demands for accountability are viewed as part of a broader trend towards an audit society [17, 18], or as a manifestation of neoliberal ideology in the governance of higher education [19,20,21].

