Abstract
In this essay we explore parallels in the birth, evolution and final ‘banning’ of journal impact factors (IFs) and university rankings (URs). IFs and what has become popularized as global URs (GURs) were born in 1975 and 2003, respectively, and the obsession with both ‘tools’ has gone global. They have become important instruments for a diverse range of academic and higher education purposes (IFs: e.g. for hiring and promoting faculty, granting and denying faculty tenure, distributing research funding, or administering institutional evaluations; URs: e.g. for reforming university/department curricula, faculty recruitment, promotion and wages, funding, student admissions and tuition fees). As a result, both IFs and GURs are heavily advertised: IFs on publishers’ web pages and GURs in the media as soon as they are released. In recent years, however, both IFs and GURs have been heavily criticized by the scientific community. Consequently, IFs (which, while originally intended to evaluate journals, were later misapplied to the evaluation of scientific performance) were recently ‘banned’ by various academic stakeholders from use in ‘evaluations’ of individual scientists, individual articles, hiring/promotion decisions and funding proposals. Similarly, URs and GURs have also provoked boycotts throughout the world, probably the most recent being the boycott of the German ‘Centrum für Hochschulentwicklung’ (CHE) rankings by German sociologists. Perhaps (and hopefully) the recent bans on IFs and URs/GURs are the first steps in a process of academic self-reflection leading to the insight that higher education must urgently take control of its own metrics.
Highlights
Managers, administrators, policy makers, journalists and the public at large all like the simple numerical ordering of people and products because it is readily accessible
It comes as no surprise that both journal impact factors (IFs) and university rankings (URs), either global (GURs) or not, were met with both a sense of relief and greed by those who primarily use them (Table 1)
The analysis of the data on those who signed the Declaration on Research Assessment (DORA) as of 24 June 2013 showed that ‘6% were in the humanities and 94% in scientific disciplines; 46.8% were from Europe, 36.8% from North and Central America, 8.9% from South America, 5.1% from Asia and the Middle East, 1.8% from Australia and New Zealand, and 0.5% from Africa’