Abstract

The Leiden Rankings can be used for grouping research universities by considering universities which are not statistically significantly different as homogeneous sets. The groups and intergroup relations can be analyzed and visualized using tools from network analysis. Using the so‐called “excellence indicator” PPtop‐10%—the proportion of the top‐10% most‐highly‐cited papers assigned to a university—we pursue a classification using (a) overlapping stability intervals, (b) statistical‐significance tests, and (c) effect sizes of differences among 902 universities in 54 countries; we focus on the UK, Germany, Brazil, and the USA as national examples. Although the groupings remain largely the same using different statistical significance levels or overlapping stability intervals, these classifications are uncorrelated with those based on effect sizes. Effect sizes for the differences between universities are small (w < .2). The more detailed analysis of universities at the country level suggests that distinctions beyond three or perhaps four groups of universities (high, middle, low) may not be meaningful. Given similar institutional incentives, isomorphism within each eco‐system of universities should not be underestimated. Our results suggest that networks based on overlapping stability intervals can provide a first impression of the relevant groupings among universities. However, the clusters do not provide well‐defined divisions between groups of universities.
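
The two statistics named above can be illustrated compactly. The following is a minimal sketch in Python (our illustration, not the authors' code; the publication counts are hypothetical) of a z-test for the difference between two universities' PPtop-10% proportions, together with Cohen's w as the effect size of that difference:

    # Minimal sketch: z-test and Cohen's w for two PPtop-10% proportions.
    # All counts are hypothetical; this is not the Leiden Ranking pipeline.
    import math
    from scipy.stats import chi2_contingency, norm

    def z_test_proportions(top1, n1, top2, n2):
        """Two-sided z-test for the difference between two proportions."""
        p1, p2 = top1 / n1, top2 / n2
        pooled = (top1 + top2) / (n1 + n2)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        z = (p1 - p2) / se
        return z, 2 * norm.sf(abs(z))  # z-value and two-sided p-value

    def cohens_w(top1, n1, top2, n2):
        """Cohen's w from the 2x2 table (top-10% papers vs. the rest)."""
        table = [[top1, n1 - top1], [top2, n2 - top2]]
        chi2 = chi2_contingency(table, correction=False)[0]
        return math.sqrt(chi2 / (n1 + n2))

    # Hypothetical example: 300 of 2,000 vs. 250 of 2,000 papers in the top 10%.
    z, p = z_test_proportions(300, 2000, 250, 2000)
    w = cohens_w(300, 2000, 250, 2000)
    print(f"z = {z:.2f}, p = {p:.4f}, w = {w:.3f}")

Note that a difference can be statistically significant (|z| > 1.96) while w stays well below .2, the "small" range reported above; this is exactly the disjunction between significance-based and effect-size-based classifications that the abstract describes.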

Highlights

  • Following the introduction of the “Shanghai rankings” of universities in 2004 (Academic Ranking of World Universities, ARWU, 2004), a quasi-industry of university rankings has emerged (e.g., Shin, Toutkoushian, & Teichler, 2011)

  • Nine hundred of the 902 universities in the Leiden Ranking (LR) 2017 are linked into the largest component on the basis of overlaps in their stability intervals (see the sketch after this list)

  • We have analyzed the significance of differences in the scores of universities on the LR 2017 in terms of effect sizes, stability intervals, and the z-test
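
The second highlight can be sketched as follows (the intervals are hypothetical, not Leiden Ranking data): universities become nodes, an edge is drawn whenever two stability intervals overlap, and the largest connected component is then read off with networkx.

    # Hedged sketch: link universities whose stability intervals overlap.
    # The intervals below are invented for illustration only.
    import networkx as nx

    intervals = {
        "Univ A": (10.5, 12.5),
        "Univ B": (11.8, 13.9),
        "Univ C": (13.5, 15.2),
        "Univ D": (20.0, 22.1),  # overlaps no one: stays outside the component
    }

    G = nx.Graph()
    G.add_nodes_from(intervals)
    names = list(intervals)
    for i, u in enumerate(names):
        for v in names[i + 1:]:
            (lo_u, hi_u), (lo_v, hi_v) = intervals[u], intervals[v]
            if lo_u <= hi_v and lo_v <= hi_u:  # the two intervals overlap
                G.add_edge(u, v)

    largest = max(nx.connected_components(G), key=len)
    print(sorted(largest))  # ['Univ A', 'Univ B', 'Univ C']

Note that membership in a component is transitive even though interval overlap is not: Univ A and Univ C are grouped together via Univ B without overlapping each other, which is one reason such clusters are not well-defined divisions.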


Introduction

Following the introduction of the “Shanghai rankings” of universities in 2004 (Academic Ranking of World Universities, ARWU, 2004), a quasi-industry of university rankings has emerged (e.g., Shin, Toutkoushian, & Teichler, 2011). Although there is some consensus about a group of most-elite universities, differing parameters and models can have considerable effects on lower-ranked universities. From this perspective, the reliability of rankings is low. For example, state universities have hitherto often been funded using a scheme which assumes equality among them. Using a similar methodology, Ville et al. (2006) found decreasing inequality in research outputs among Australian universities during the period 1992–2003. Universities appear to have become more similar.
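
Ville et al.'s result concerns inequality in research outputs. One common way to quantify such inequality (an assumption for illustration; not necessarily the method used in that study) is the Gini coefficient over universities' publication counts:

    # Gini coefficient over publication counts: 0 = equality, ~1 = inequality.
    # Counts are hypothetical; a falling Gini mirrors "decreasing inequality".
    def gini(counts):
        xs = sorted(counts)
        n, total = len(xs), sum(xs)
        cum = sum((i + 1) * x for i, x in enumerate(xs))
        return (2 * cum) / (n * total) - (n + 1) / n

    print(round(gini([100, 120, 300, 800, 1500]), 3))  # earlier period: higher
    print(round(gini([400, 500, 550, 700, 900]), 3))   # later period: lower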

