Abstract

Policy highlights

  • This paper criticizes a “quick-and-dirty” desktop model for the use of metrics in the assessment of academic research performance, and proposes a series of alternatives.

  • It considers frequently used indicators: publication and citation counts, university rankings, journal impact factors, and social media-based metrics.

  • It argues that research output and impact are multi-dimensional concepts, and that, when used to assess individuals and groups, these indicators suffer from severe limitations:

  • Metrics for individual researchers suggest a “false precision”; university rankings are semi-objective and semi-multidimensional; informetric evidence of the validity of journal impact measures is thin; and social media-based indicators should at best be used as complementary measures.

  • The paper proposes alternatives to the desktop application model: combine metrics and expert knowledge; assess research groups rather than individuals; use indicators to define minimum standards; and use funding formulas that reward promising, emerging research groups.

  • It proposes a two-level model in which institutions develop their own assessment and funding policies, combining metrics with expert and background knowledge, while at the national level a meta-institutional agency marginally tests the institutions’ internal assessment processes.

  • According to this model, it is an inappropriate use of metrics for a meta-institutional agency to concern itself directly with the assessment of individuals or groups within an institution.

  • The proposed model is not politically neutral. A normative assumption is the autonomy of academic institutions: the meta-institutional agency acknowledges that it is the primary responsibility of the institutions themselves to conduct quality control.

  • Rather than having one meta-national agency define what research quality is and how it should be measured, the proposed model enables each institution to define its own quality criteria and internal policy objectives, and to make these public.

  • But this freedom is accompanied by a series of obligations: as a necessary condition, institutions should conceptualize and implement their own internal quality-control and funding procedures.

  • Although a meta-institutional agency may help to improve an institution’s internal processes, a repeatedly negative outcome of a marginal test may have negative consequences for the institution’s research funding.

This paper discusses a subject as complex as the assessment of scientific-scholarly research for evaluative purposes. It focuses on the use of informetric or bibliometric indicators in academic research assessment. It proposes a series of analytical distinctions, and draws conclusions regarding the validity and usefulness of indicators frequently used in the assessment of individual scholars, scholarly institutions, and journals. The paper criticizes a so-called desktop application model based upon a set of simplistic, poorly founded assumptions about the potential of indicators and the essence of research evaluation, and proposes a more reflexive, theoretically founded, two-level model for the use of metrics in academic research assessment.


Summary

Publication based: scientific journal paper; book chapter; scholarly monograph; conference paper; editorial; review.

Non-publication based: research dataset; software, tool, instrument; video of experiment; registered intellectual rights; product; process; device; design; image; spin-off; registered industrial rights; revenues from commercialization of intellectual property.

Evaluation: alternative approaches

  • Fund institutions according to the number of emerging research groups (a hypothetical sketch of such a funding formula follows below)

  • Distribute funds within an institution
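The paper itself does not spell out a concrete funding formula. Purely as a hypothetical illustration, the sketch below allocates a funding pool across institutions in proportion to their counts of emerging research groups; the proportional rule, function names, and figures are all assumptions made for this sketch, not the paper's method.

```python
# Hypothetical illustration only: split a funding pool across
# institutions in proportion to their number of emerging research
# groups. The proportional rule and all names/figures are assumed;
# the paper does not define a concrete formula.

def allocate_funds(pool: float, emerging_groups: dict[str, int]) -> dict[str, float]:
    """Return each institution's share of `pool`, proportional to its group count."""
    total = sum(emerging_groups.values())
    if total == 0:
        # No emerging groups anywhere: nothing to distribute on this criterion.
        return {name: 0.0 for name in emerging_groups}
    return {name: pool * count / total for name, count in emerging_groups.items()}

# Example: a pool of 10 million distributed over three institutions.
shares = allocate_funds(10_000_000, {"Univ A": 4, "Univ B": 1, "Univ C": 5})
print(shares)  # {'Univ A': 4000000.0, 'Univ B': 1000000.0, 'Univ C': 5000000.0}
```

The same proportional rule could, in principle, also serve the second approach, distributing funds within a single institution across its own groups.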