Abstract

In our planned study, we shall empirically investigate the assessment of cited papers within the framework of the anchoring-and-adjustment heuristic. We are interested in the question of whether citation decisions are (mainly) driven by the quality of cited references. The design of our study is oriented towards the study by Teplitskiy, Duede [10]. We shall undertake a survey of corresponding authors with an available email address in the Web of Science database. The authors are asked to assess the quality of papers that they cited in previous papers. Authors will be assigned to one of three treatment groups that receive further numerical information alongside the cited paper: citation information, information on the publishing journal (journal impact factor), or a numerical access code to enter the survey. The control group will not receive any further numerical information. In the statistical analyses, we estimate how strongly respondents adjust their quality assessments of the cited papers toward the anchor value (citation count, journal impact factor, or access code). Thus, we are interested in whether possible adjustments in the assessments can be produced not only by quality-related information (citations or journal metrics), but also by numbers that are unrelated to quality, i.e. the access code. The results of the study may have important implications for quality assessments of papers by researchers and for the role of numbers, citations, and journal metrics in assessment processes.
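
To make the planned estimation concrete, the following is a minimal simulation sketch in Python; the group labels, sample sizes, effect sizes, and rating scale are hypothetical assumptions for illustration, not values or models taken from the preregistration. It shows one straightforward way to estimate the adjustment: regress the quality ratings on the (standardized) anchor value within each treatment group, so that the anchor slope measures how strongly ratings follow the displayed number. Under the logic described above, a clearly nonzero slope in the access-code group would indicate anchoring on a number unrelated to quality.

```python
# Hypothetical sketch, not the authors' preregistered analysis: simulate
# quality ratings under three anchor conditions and estimate, per group,
# how strongly ratings are adjusted toward the displayed anchor value.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_per_group = 150  # assumed number of respondents per treatment group

# Assumed adjustment strengths: quality-related anchors pull the ratings,
# the quality-unrelated access code does not.
true_slope = {"citation": 2.0, "jif": 1.5, "access_code": 0.0}

rows = []
for group, slope in true_slope.items():
    anchor = rng.normal(0.0, 1.0, n_per_group)   # standardized anchor shown
    quality = 50 + slope * anchor + rng.normal(0.0, 5.0, n_per_group)
    rows.append(pd.DataFrame({"group": group, "anchor": anchor,
                              "quality": quality}))
df = pd.concat(rows, ignore_index=True)

# Per-group OLS: the anchor coefficient estimates the adjustment of the
# quality ratings toward the anchor value in that treatment group.
for group in true_slope:
    fit = smf.ols("quality ~ anchor", data=df[df["group"] == group]).fit()
    print(f"{group:12s} anchor slope = {fit.params['anchor']:+.2f} "
          f"(p = {fit.pvalues['anchor']:.3f})")
```

Fitting the slope separately per group avoids comparing against the control group directly, since control respondents see no number and thus have no anchor value to regress on.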

Highlights

  • In a recent study analyzing peer review reports, Hammarfelt, Rushforth [1] identified criteria and strategies that reviewers used to decide on candidates for professorships in Sweden

  • It is no surprise to read in the paper that bibliometric indicators play a prominent role in the assessments of the candidates

  • The reason for their use is understandable: since no researcher is an expert in all fields within a discipline, it appears reasonable to resort to metrics for fields that are remote from one's own


Introduction

In a recent study analyzing peer review reports, Hammarfelt, Rushforth [1] identified criteria and strategies that reviewers used to decide on candidates for professorships in Sweden. Imagine, for example, the last round of a selection process in which the candidates are very similar in terms of their performance: their previous research papers, funds received, and teaching experience have been discussed in great detail. In such an example, it is citation counts that could act as an anchor, as could numerical identifiers for the candidates (or any other arbitrary number) presented with the summaries.

In this preregistration, we present the proposal for a study that is intended to investigate various anchoring effects in the assessment of scientific work. Before we explain our planned research design in more detail below, we present a literature overview of the anchoring-and-adjustment framework (in psychology). This overview is mainly based on two comprehensive overviews published some years ago: Mussweiler, Englich [11] (and the revised chapter published by Bahnik, Englich [12]) and Furnham and Boo [4]. A brief overview of anchoring effect research can be found in Kahneman [13].

