Abstract

This paper describes a method for providing an independent, community-sourced set of best-practice criteria with which to assess global university rankings, and for identifying the extent to which six rankings met those criteria: Academic Ranking of World Universities (ARWU), CWTS Leiden, QS World University Rankings (QS WUR), Times Higher Education World University Rankings (THE WUR), U-Multirank, and US News & World Report Best Global Universities. The criteria fell into four categories: good governance, transparency, measure what matters, and rigour. The relative strengths and weaknesses of each ranking were compared. Overall, the rankings assessed fell short of the criteria, showing the greatest strengths in the area of transparency and the greatest weaknesses in measuring what matters to the communities they were ranking. The ranking that most closely met the criteria was CWTS Leiden, while the THE WUR and US News rankings scored poorly across all the criteria. Suggestions for developing the ranker rating method are described.

Highlights

  • Global university rankings are an established part of the global higher education landscape

  • In parallel with the International Network of Research Management Societies (INORMS) REWG’s work to rate the global university rankings, the group developed a framework for responsible research evaluation, called SCOPE (Himanen and Gadd, 2019)

  • Intra-class Correlation Coefficients were calculated for each set of reviews (Table 1), indicating moderate to good inter-rater reliability (Koo and Li, 2016)
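The inter-rater reliability mentioned above can be illustrated with a short calculation. The sketch below computes a two-way random-effects, single-measure intra-class correlation, ICC(2,1), which is one of the forms discussed by Koo and Li (2016); the reviewer scores used here are invented for illustration and are not the study's data, and this is not necessarily the exact ICC variant the authors used.

```python
def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.

    `scores` is a list of rows (rated items) by columns (reviewers).
    """
    n = len(scores)          # number of rated items (subjects)
    k = len(scores[0])       # number of reviewers (raters)
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(scores[i][j] for i in range(n)) / n for j in range(k)]

    # Standard two-way ANOVA decomposition of the score matrix.
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    ss_err = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))

    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Invented example: three reviewers scoring five criteria on a 1-5 scale.
ratings = [
    [4, 4, 5],
    [2, 3, 2],
    [5, 5, 4],
    [1, 2, 1],
    [3, 3, 3],
]
print(round(icc_2_1(ratings), 3))  # → 0.873, "good" reliability per Koo and Li
```

By Koo and Li's (2016) rule of thumb, values between 0.5 and 0.75 indicate moderate reliability and values between 0.75 and 0.9 indicate good reliability, which is the range the highlight refers to.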


Introduction

Global university rankings are an established part of the global higher education landscape. While the international research management community are not always the ones in their institutions who deal directly with the global university ranking agencies, they are one of the groups that feel their effects most strongly. This might be through their university's exclusion from studentship funding sources based on its ranking position; through requests to collect, validate, and optimise the data submitted; or through calls to implement strategies that may lead to better ranking outcomes. At the same time as having to work within an environment influenced by university rankings, the research management community are acutely aware of, and concerned about, the perceived invalidity of the approaches the rankings use. The International Network of Research Management Societies (INORMS) Research Evaluation Working Group (2021) therefore dedicated one of its work-packages to developing a tool by which the relative strengths and weaknesses of global university rankings might be surfaced and used to influence behavioural change, both by ranking agencies and by those who rely upon them for decision-making.
