Abstract

In the spring of 2019, survey research was conducted at Virginia Polytechnic Institute and State University (Virginia Tech), a large, public, Carnegie-classified R1 institution in southwest Virginia, to determine faculty perceptions of research assessment as well as how and why faculty use researcher profiles and research impact indicators. The Faculty Senate Research Assessment Committee (FSRAC) reported the quantitative and qualitative results to the Virginia Tech Board of Visitors to demonstrate the need for systemic, political, and cultural change in how faculty are evaluated and rewarded at the university for their research and creative projects. The survey research and subsequent report started a gradual process of moving the university toward a more responsible, holistic, and inclusive research assessment environment. Key results from the survey, completed by nearly 500 faculty from across the university, include: (a) the most frequently used researcher profile systems and the primary ways they are used (e.g., profiles are used most often to showcase work, and faculty prefer to use a combination of systems for this purpose); (b) the primary reasons faculty use certain research impact indicators (e.g., the number of publications is frequently used, but far more often for institutional reasons than for personal or professional ones); (c) faculty feel that research assessment is most fair at the department level and least fair at the university level; and (d) faculty do not feel positively about their research being assessed for the allocation of university funding.

Highlights

  • In November 2018, the ad hoc Faculty Senate Research Assessment Committee (FSRAC) was formed under the direction of the Faculty Senate President at Virginia Polytechnic Institute and State University (Virginia Tech) to explore concerns and grievances regarding the evaluation of faculty research, scholarship, and creative works, as well as concerns regarding faculty salary.

  • These same studies also found that researchers maintain strong confidence in traditional citation-based indicators, such as the journal impact factor (JIF), h-index, and citation counts, despite their grievances and the limitations of these indicators (Blankstein & Wolff-Eisenberg, 2019; Cooper et al., 2017a, 2017b, 2017c, 2018, 2019; Rutner & Schonfeld, 2012; Schonfeld & Long, 2014; Templeton & Lewis, 2015); further research has found that researchers have little to no familiarity with altmetrics and, when they do, typically do not find them valuable for promotion and tenure.

  • This paper considers national and international issues that shape the evaluation of research, such as world university rankings, the limitations of commercial bibliographic databases, and experts’ recommendations on the use of research impact indicators and databases in formal research evaluation.


Introduction

A recent study of over 120 million papers published over the past two centuries found that the most trusted and well-known research impact indicators, such as publication counts, citation counts, and the journal impact factor (JIF), have become increasingly compromised and less meaningful as a result of the pressure to publish and produce high-impact work (Fire & Guestrin, 2019). Despite these findings, researchers’ confidence in citation indicators remains strong (Blankstein & Wolff-Eisenberg, 2019; Buela-Casal & Zych, 2012; Thuna & King, 2017).
