Abstract

Researcher behavior is shown to change under assessment. An unexpected time-skew toward the most recent papers in each census period was found among the outputs selected by UK academics for the research assessment cycles of the 1990s. This skew shifted to a more even distribution over time for scientists and engineers in later cycles. At the same time, engineers switched their preferred output type for submission from conference proceedings to journal articles. Social scientists also switched, from monographs to journal articles. There was no discussion of these output patterns at the time, or later, but the patterns and their evolution were markedly consistent across subjects and institutions. These changes are discussed in terms of consensus and of influences on researchers' concepts of the evidence of excellence. The increasing availability of citation data in the 1990s, and the likely role of citation analysis as a steering factor, are noted.

Highlights

  • This paper is about the outputs selected by UK researchers for a series of cyclical assessments and the ways in which the pattern of distribution by document type and by year within each census period changed in successive cycles

  • RAE1992 and RAE1996 data can nominally be reconciled to the four REF2014 Main Panels but, where aggregation was required, data were aggregated into domains according to similarity in publication usage; the analysis underpinning this was originally developed for RAE1996 data and based on clustering Units of Assessment (UOAs) by similarity in journal frequency (Adams, 1998)

  • The relatively selective Thomson Reuters Web of Science database indexes 90,000 UK-authored journal articles per year, so the sum of these across each census period would exceed the number of outputs required for that Research Assessment Exercise (RAE) cycle
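The grouping of Units of Assessment by similarity in journal frequency can be sketched as follows. This is an illustrative reconstruction only, not the method of Adams (1998): the UOA names, journal counts, cosine-similarity measure, and merge threshold are all assumptions introduced for demonstration.

```python
# Illustrative sketch: group UOAs whose journal-frequency profiles are similar.
# All profile data below is invented for demonstration purposes.
from itertools import combinations
from math import sqrt

profiles = {
    "Physics":   {"Phys Rev Lett": 40, "Phys Rev B": 30, "Astron J": 5},
    "Astronomy": {"Phys Rev Lett": 10, "Astron J": 45, "Phys Rev B": 5},
    "Sociology": {"Brit J Sociol": 30, "Sociol Rev": 25},
}

def cosine(p, q):
    """Cosine similarity between two sparse journal-frequency profiles."""
    journals = set(p) | set(q)
    dot = sum(p.get(j, 0) * q.get(j, 0) for j in journals)
    norm = (sqrt(sum(v * v for v in p.values()))
            * sqrt(sum(v * v for v in q.values())))
    return dot / norm if norm else 0.0

def cluster(profiles, threshold=0.2):
    """Single-linkage grouping: merge any pair of UOAs whose profile
    similarity exceeds the threshold, using a simple union-find."""
    parent = {u: u for u in profiles}
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]  # path compression
            u = parent[u]
        return u
    for a, b in combinations(profiles, 2):
        if cosine(profiles[a], profiles[b]) > threshold:
            parent[find(a)] = find(b)
    groups = {}
    for u in profiles:
        groups.setdefault(find(u), set()).add(u)
    return list(groups.values())

print(cluster(profiles))
```

With these invented profiles, Physics and Astronomy share journals and merge into one domain, while Sociology remains separate; the same logic, applied to real UOA journal-frequency data, yields publication-usage domains.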

Introduction

This paper is about the outputs selected by UK researchers for a series of cyclical assessments and the ways in which the pattern of distribution by document type and by year within each census period changed in successive cycles. The observed skews, and their evolution as more information became available to the researchers, throw light on our understanding of the factors influencing expert judgments and status signals among researchers. This is not a paper about peer review, which has been thoroughly deconstructed elsewhere (de Rijcke et al., 2016), but it has relevance to researcher judgments about research excellence. For peer evaluation to be valid, we assume that a cognate expert group (e.g., researchers in a specific field) shares an unwritten set of standards and is competent in using them to judge achievement. If so, the same judgment should apply to the choice of material submitted as evidence of research excellence.
