Abstract

Journal rankings of specific research fields are often used for evaluation purposes, both of authors and of institutions. These rankings can be defined by several methods, such as expert assessment, scholarly agreements, or the ordering induced by a numeric index associated with the prestige of the journals. In order to be effective and accepted by the research community, a ranking must preserve its ordering over time, at least to some extent. Otherwise, the procedure for defining the ranking must be revised to ensure that it reflects the presumably stable characteristic “prestige” that it claims to quantify. A mathematical model based on fractional p-variations of the values of the order number of each journal in a time series of journal rankings is presented, and its main properties are shown. As an example, we study the evolution of two given ordered lists of journals through an eleven-year series. These journal ranks are defined using the 2-year Impact Factor of the Thomson Reuters (nowadays Clarivate Analytics) lists for MATHEMATICS and PHYSICS, APPLIED from 2002 to 2013. As an application of our model, we define an index that precludes the use of journal ranks for evaluation purposes when some minimal requirements on the associated fractional p-variations are not satisfied. The final conclusion is that the MATHEMATICS list does not satisfy the requirements on the p-variations, while the PHYSICS, APPLIED list does.
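
For illustration, the following is a minimal sketch of how a fractional p-variation of a journal's rank trajectory could be computed, assuming the simple increment-sum form V_p(r) = ( sum_t |r_{t+1} - r_t|^p )^(1/p) with 0 < p <= 1. The exact definition used in the paper, the function name, and the sample rank values are illustrative assumptions, not taken from the source.

```python
import numpy as np

def fractional_p_variation(ranks, p=0.5):
    """Increment-sum fractional p-variation of a rank trajectory.

    `ranks` holds the order number (position) a journal occupies in
    consecutive yearly rankings; `p` is the variation exponent
    (typically 0 < p <= 1 in the "fractional" case). This is an
    assumed form, not necessarily the paper's exact definition.
    """
    increments = np.abs(np.diff(np.asarray(ranks, dtype=float)))
    return float(np.sum(increments ** p) ** (1.0 / p))

# Hypothetical rank positions of one journal across consecutive yearly lists.
ranks = [12, 15, 14, 20, 18, 25, 23, 30, 28, 27, 31]

# Larger values indicate a less stable position in the ranking; a
# threshold on this quantity could serve as a minimal-stability check
# of the kind the abstract's index suggests.
print(fractional_p_variation(ranks, p=0.5))
print(fractional_p_variation(ranks, p=1.0))
```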

Highlights

  • Journal rankings have become a basic tool for research and library management

  • The interaction among impact factors, journal rankings, and research policy has been inherent since the very beginning of Eugene Garfield’s work and the creation of the fundamental bibliometric tools

  • The main examples we consider are the series of values over the last 11 years of the Clarivate Analytics 2-year Impact Factor lists in two given scientific subjects (MATHEMATICS and PHYSICS, APPLIED), which will be analyzed in the Discussion section

Summary

Introduction

The ways in which journal rankings can be established are diverse and range from an agreement among a group of experts to a citation-based journal impact index. In any case, their use is a fundamental resource for committees and professionals in charge of decisions involving research assessment. An example is the procedure used by the Polish office of institutional research and assessment, whose performance-based research funding system is the so-called Comprehensive Evaluation of Scientific Units (CESU). In this case, the evaluation ranking is elaborated through a mixed procedure that takes into account both bibliometric indices and the experts’ assessment (see [1,2]). The easiest way of defining a journal ranking is by using a numerical index that measures some aspects of the chosen journals.

