Abstract

Universities are increasingly evaluated on the basis of their outputs. These are often converted to simple and contested rankings with substantial implications for recruitment, income, and perceived prestige. Such evaluation usually relies on a single data source to define the set of outputs for a university. However, few studies have explored differences across data sources and their implications for metrics and rankings at the institutional scale. We address this gap by performing detailed bibliographic comparisons between Web of Science (WoS), Scopus, and Microsoft Academic (MSA) at the institutional level and supplement this with a manual analysis of 15 universities. We further construct two simple rankings based on citation count and open access status. Our results show that there are significant differences across databases. These differences contribute to drastic changes in the rank positions of universities, which are most prevalent for non-English-speaking universities and those outside the top positions in international university rankings. Overall, MSA has greater coverage than Scopus and WoS, but with less complete affiliation metadata. We suggest that robust evaluation measures need to consider the effect of the choice of data source and recommend an approach in which data from multiple sources are integrated to provide a more robust data set.

Highlights

  • Bibliometric statistics are commonly used by university leadership, governments, funders, and related industries to quantify academic performance

  • We aim to provide a deep exploration of the coverage of research objects with DOIs in WoS, Scopus, and MSA, in terms of both volume and various bibliographic variables, at the institutional level (a schematic illustration of such a DOI-set comparison follows this list)

  • Our goal is to identify the effects of coverage on the discoverability of sets of outputs that would be evaluated using an external source of data
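As a rough illustration of what such a coverage comparison involves, the Python sketch below computes per-source coverage and pairwise overlap from sets of DOIs. The file names (wos.csv, scopus.csv, msa.csv), the 'doi' column, and the overlap measures are illustrative assumptions, not the paper's actual pipeline.

```python
import csv

def load_dois(path):
    """Read a CSV export and return the set of normalised DOIs in its 'doi' column."""
    with open(path, newline="", encoding="utf-8") as handle:
        return {row["doi"].strip().lower()
                for row in csv.DictReader(handle) if row.get("doi")}

# One export per bibliographic source for a single institution (file names are assumed).
sources = {name: load_dois(f"{name}.csv") for name in ("wos", "scopus", "msa")}
union = set().union(*sources.values())

# Coverage of each source relative to the union of all three sources.
for name, dois in sources.items():
    print(f"{name}: {len(dois)} DOIs, {len(dois) / len(union):.1%} of the union")

# Pairwise overlap, expressed as the share of the smaller set found in both sources.
for a, b in (("wos", "scopus"), ("wos", "msa"), ("scopus", "msa")):
    shared = len(sources[a] & sources[b]) / min(len(sources[a]), len(sources[b]))
    print(f"{a} & {b}: {shared:.1%} of the smaller set")
```

Normalising DOIs to lowercase before comparison matters because DOI matching is case-insensitive, so the same output may be recorded with different capitalisation in different databases.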

Summary

Introduction

Bibliometric statistics are commonly used by university leadership, governments, funders, and related industries to quantify academic performance. This in turn may define academic promotion, tenure, funding, and other functional facets of academia. This obsession with excellence is highly correlated with various negative impacts on both academic behavior and research bias (Anderson, Ronning et al., 2007; Fanelli, 2010; van Wessel, 2016; Moore, Neylon et al., 2017). These metrics (such as citation counts and impact factors) are often derived from one of the large bibliographic sources, such as Web of Science (WoS), Scopus, or Google Scholar (GS). Scopus is utilized by the QS World University Rankings and the THE World University Rankings for citation counts, while the Academic Ranking of World Universities makes use of data from WoS.
