Abstract

Universities are compared through a number of rankings and evaluations designed by various institutions and agencies. A well-designed ranking system helps prospective undergraduate and graduate students make decisions about their studies according to an appropriate set of quality parameters such as attractiveness, research performance, student satisfaction, graduate employment, etc. However, the world rankings operate with different metrics, datasets and methods, and with a limited number of indicators chosen on the basis of their availability. Less reliable or context-free data are often utilised. These discrepant approaches to composite indices produce inconsistent results, which casts doubt on the veracity of the rankings. Admittedly, internal and external evaluations of the quality of teaching and research are carried out periodically, and other bodies assess universities against quality assurance standards. In addition, governments require the publication of annual reports and long-term strategic plans, or recommend that universities carry out self-evaluations. All in all, these external pressures follow a “one size fits all” standard. This approach defines a global ideal through the framework and structure of its parameters and thus neglects the diversity and context in which universities operate. A new approach advocates a ranking-free setting of strategic objectives that would integrate the intellectual capital approach, annual reporting and the long-term plan into one coherent strategic management system. The resulting set of flexible indicators, organised within the intellectual capital categories, is proposed as part of the annual report and is aimed at monitoring the university's own strategic goals.
