Abstract
This article compares six informetric approaches to determining cognitive distances between the publications of panel members and those of research groups in discipline-specific research evaluation. We used data collected in the framework of six completed research evaluations from the period 2009–2014 at the University of Antwerp as a test case. We distinguish between two levels of aggregation – Web of Science subject categories and journals – and three methods: while the barycenter method (2-dimensional) is based on global maps of science, the similarity-adapted publication vector (SAPV) method and the weighted cosine similarity (WCS) method (both in higher dimensions) use a full similarity matrix. In total, this leads to six different approaches, all of which are based on the publication profiles of research groups and panel members. We use Euclidean distances between barycenters and between SAPVs, as well as values of WCS between panel members and research groups, as indicators of cognitive distance. We systematically compare how these six approaches are related. The results show that the level of aggregation has a minor influence on determining cognitive distances, but dimensionality (two versus a high number of dimensions) has a greater influence. The SAPV and WCS methods agree in most cases, at both levels of aggregation, on which panel member has the closest cognitive distance to the group to be evaluated, whereas the barycenter approaches often differ. Comparing the results of the methods to the main assessor who was assigned to each research group, we find that the barycenter method usually scores better. However, the barycenter method is less discriminatory and suggests more potential evaluators, whereas SAPV and WCS are more precise.
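To make the two families of indicators concrete, the following is a minimal illustrative sketch (not the authors' code) of how a cognitive distance could be computed under both views: the barycenter method takes the Euclidean distance between publication-weighted mean positions on a 2-dimensional map of science, while WCS compares full publication vectors through a category-by-category similarity matrix. All coordinates, publication profiles, and the similarity matrix below are made-up toy data.

```python
import numpy as np

def barycenter(profile, coords):
    """Publication-weighted mean position on a 2-D map of science."""
    w = profile / profile.sum()
    return w @ coords

def weighted_cosine(x, y, M):
    """WCS: cos_M(x, y) = x'My / sqrt((x'Mx)(y'My))."""
    return (x @ M @ y) / np.sqrt((x @ M @ x) * (y @ M @ y))

# Toy data: 3 subject categories with assumed 2-D map coordinates
coords = np.array([[0.0, 0.0],
                   [1.0, 0.0],
                   [0.0, 1.0]])
group = np.array([10.0, 5.0, 0.0])   # group's publications per category
panel = np.array([2.0, 8.0, 1.0])    # panel member's publications

# Barycenter approach: Euclidean distance between the two barycenters
d = np.linalg.norm(barycenter(group, coords) - barycenter(panel, coords))

# WCS approach: symmetric similarity matrix (1 on the diagonal,
# off-diagonal entries = assumed inter-category similarities)
M = np.array([[1.0, 0.4, 0.1],
              [0.4, 1.0, 0.2],
              [0.1, 0.2, 1.0]])
wcs = weighted_cosine(group, panel, M)
```

A smaller distance `d`, or a WCS value closer to 1, would indicate a panel member cognitively closer to the group; with an identity matrix `M`, WCS reduces to the ordinary cosine similarity of the two profiles.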
Highlights
Since the 1980s, a large number of research evaluation programs have emerged in most OECD (Organization for Economic Co-operation and Development) countries, at both the institutional and national levels (OECD, 1997)
For the comparison between the approaches, we reuse the results of the similarity-adapted publication vector (SAPV) and barycenter methods at the level of journals, which were previously obtained by Rahman et al. (2016)
We focused on which of the approaches best reflects cognitive distance, how much influence the level of aggregation has, and how much dimensionality matters
Summary
Since the 1980s, a large number of research evaluation programs have emerged in most OECD (Organization for Economic Co-operation and Development) countries, at both the institutional and national levels (OECD, 1997). Many countries have implemented formal policies to assess the performance and output of publicly funded research at the national, regional, and institutional levels (Whitley, 2007; Hammarfelt and de Rijcke, 2015). The United Kingdom's Research Excellence Framework system for assessing the quality of research in UK higher education institutions is an example of such an informed peer review evaluation (REF2014, 2014). Developing trustworthy ways of recognizing and supporting the "best research" is key to a healthy research environment (Owens, 2013).