For decades, the UK’s Research Assessment Exercise (RAE) has been a familiar, if unloved, feature of its higher education landscape. Every few years, UK universities must prepare detailed submissions explaining how their various departments and research units have conducted research, describing the outputs of that research, and supplying a mass of statistics on, for example, the number of staff in each so-called UoA (Unit of Assessment, i.e. subject area), the amount of research income, the number of PhD students, and so on. This mass of documentation, including copies of selected published outputs from each nominated academic in the university, is then sent to panels of subject experts, who evaluate the quality of research from that particular department or unit. The result, after much expenditure of time and money, is a grading for each UoA at each university. A high grade is not just a matter of pride or something to use in advertising materials; it translates into serious government cash for the relevant department or unit, and so can make or break a given department. The results of each RAE are subsequently made public; those of the last exercise, in 2001, can be found at http://www.hero.ac.uk/rae/Results/. Another RAE was conducted at the end of 2007, and its results will be issued at the end of 2008.

There have been many complaints about the RAE over the years: that it is bureaucratic, expensive to run and imposes massive opportunity costs on UK universities, and that the evaluation suffers from all the familiar problems of peer review. Discussions are confidential; the panels are said to be biased in favour of well-known departments and conservative research, and against little-known individuals or departments and interdisciplinary or highly speculative research; and the exercise looks backward at past research glories rather than evaluating the future of research in the departments.
It is difficult to judge how justified these criticisms are, though it is clear that the panels who undertake the evaluations make strenuous efforts to be as fair as possible. The RAE is, indeed, arguably schizophrenic: it tries to be an evaluator of research quality while at the same time acting as a mechanism for distributing money for the future, and the two aims are not always compatible. A further complaint is that the RAE represents a ‘double whammy’ for academics: first they submit their research outputs to a conference, monograph editor or journal editor, where the material is peer reviewed and probably amended; then the same published outputs are peer reviewed again for the RAE, possibly by the same people who reviewed them in the first place. There are also accusations that the RAE has led to a ‘transfer market’, whereby star academics are offered large sums to move to another employer, in the hope that the academic’s publications will enhance the new employer’s RAE return.

In 2007, the UK Government made the surprise announcement that henceforth (i.e. after the 2007 RAE), for many subjects the RAE based on peer review would be scrapped in favour of evaluation by bibliometrics. The statement was made by the then Chancellor of the Exchequer in a budget announcement. This was surprising because hitherto the Chancellor had not shown the slightest interest in the matter of research evaluation of UK universities, and it is