Abstract

Scientific ranking, usually understood as a metric evaluation of one's academic reputation, does not appear to match career advancement in the university context. Paradoxically, members with a low bibliometric ranking, usually expressed as the number of publications (now known as "research products"), the Hirsch index (h-index) and citations, hold responsibilities decidedly greater than those of members with a better ranking position. Although this research was performed in a single, exemplificative Italian academy, the issue may extend to many other Italian universities, representing a great concern for the advancement of science. Ideally, researchers in the field support the thesis that academic careers and scientific rankings advance together under meritocratic rules [1]; yet sound real-world analyses make this enthusiastic view somewhat controversial [2-7]. Following the Gelmini law of 2010, a significant burden of "subjectivity" in the Expert Committee's evaluation of academic careers, applied once a candidate had earned the National Scientific Qualification for teaching in an academy, caused meritocracy to fail [7]. The apparently reasonable view that a scientific team should be empowered to select the candidate best suited to a defined project often privileges personal and character traits (empathy, readiness to obey the head's ideas without discussion, self-denial and poor creative participation) in order to prevent any conflicting proposal, idea or debate. In this arrangement of skills, scientific rankings cannot provide any real support.
Yet the Gelmini law did introduce important novelties into the previously complete anarchy of selecting candidates for an academic career, such as the so-called "medians," intended to properly link a candidate's expertise to his or her scientific reputation, i.e., a scientific ranking compared with at least half of the current academic experts in the same professional branch. Medians would be a paramount method for evaluating one's reputation, on the basis that the candidate has exceeded 50% of the confirmed experts in a nationwide assessment. Notwithstanding, other "personalized" items were involved in the selection route, introducing real perturbing biases into the fairness of the selection itself. What have we lost in this dramatic drift away from a true, honest meritocracy? A first, perhaps trivial, consideration would be to set a ranking cut-off for selecting a candidate as worthy of teaching in an academy, either as Associate Professor or Full Professor. If rankings are recognized as the only reliable metric for categorizing the expertise level of a researcher or scholar, then an institution should primarily consider rankings as the leading measure of professionalism in a defined field of research. In this Editorial, I will address this point.
