Abstract

Academic journals are the repositories of mankind's gradually accumulating knowledge of the surrounding world. Just as knowledge is organized into classes ranging from major disciplines, subjects and fields to increasingly specific topics, journals can also be categorized into groups using various metrics. In addition, they can be ranked according to their overall influence. However, according to recent studies, the impact, prestige and novelty of journals cannot be characterized by a single parameter such as, for example, the impact factor. To increase understanding of journal impact, the knowledge gap we set out to explore in our study is the evaluation of journal relevance using complex multi-dimensional measures. Thus, for the first time, our objective is to organize journals into multiple hierarchies based on citation data. The two approaches we use are designed to address this problem from different perspectives. We use a measure related to the notion of m-reaching centrality and obtain a network that shows a journal's level of influence in terms of the direction and efficiency with which information spreads through the network. We can also obtain an alternative network by applying a suitably modified nested hierarchy extraction method to the same data. In this case, in a self-organized way, the journals are arranged into branches according to the major scientific fields, where the local structure of the branches reflects the hierarchy within the given field: the algorithm usually selects the most prominent journal in the field (according to other measures) as the local root, with more specialized journals positioned deeper in the branch. This can make navigation within different scientific fields and sub-fields very simple, equivalent to navigating the different branches of the nested hierarchy. We expect this to be particularly helpful, for example, when choosing the most appropriate journal for a given manuscript.
According to our results, the two alternative hierarchies show a somewhat different, yet consistent, picture of the intricate relations between scientific journals, and, as such, they also provide a new perspective on how scientific knowledge is organized into networks.

Highlights

  • Providing an objective ranking of scientific journals and mapping them into different knowledge domains are complex problems of significant importance, which can be addressed using a number of different approaches

  • The most widely known quality measure is the impact factor (Garfield, 1955, 1999), corresponding to the total number of citations a journal receives in a 2-year period, divided by the number of published papers over the same period

  • We show that, in a somewhat similar fashion, scientific journals can be organized into multiple hierarchies of different types
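The 2-year impact factor mentioned in the highlights is a simple ratio, which can be sketched as follows (the journal name and numbers are hypothetical, used only to illustrate the calculation):

```python
def impact_factor(citations_to_prev_two_years: int, citable_items_prev_two_years: int) -> float:
    """2-year impact factor for year Y: citations received in year Y
    to items published in years Y-1 and Y-2, divided by the number of
    citable items published in those same two years."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: 480 citations in year Y to its 120 papers
# published in the two preceding years.
jif = impact_factor(480, 120)
print(jif)  # 4.0
```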

Introduction

Providing an objective ranking of scientific journals and mapping them into different knowledge domains are complex problems of significant importance, which can be addressed using a number of different approaches. The most widely known quality measure is the impact factor (Garfield, 1955, 1999), corresponding to the total number of citations a journal receives in a 2-year period, divided by the number of published papers over the same period. Although it is a rather intuitive quantity, the impact factor has serious limitations (Harter and Nisonger, 1997; Opthof, 1997; Seglen, 1997; Bordons et al., 2002). The development of higher-dimensional quality indicators for scientific journals is therefore an important objective for current research.
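One of the higher-dimensional indicators used in this work builds on m-reaching centrality: the number of nodes reachable from a given node in at most m steps along directed links. A minimal sketch using NetworkX is shown below; the toy citation-flow graph and journal labels are hypothetical, not taken from the paper's data:

```python
import networkx as nx

def m_reach(G: nx.DiGraph, node, m: int) -> int:
    """m-reaching centrality: number of other nodes reachable from
    `node` along directed edges in at most m steps."""
    reached = nx.single_source_shortest_path_length(G, node, cutoff=m)
    return len(reached) - 1  # exclude the starting node itself

# Hypothetical directed citation-flow network between five journals.
G = nx.DiGraph([("A", "B"), ("A", "C"), ("B", "D"), ("C", "D"), ("D", "E")])

# Rank journals by 2-reach: "A" reaches B, C and D within two steps.
ranking = sorted(G.nodes, key=lambda n: m_reach(G, n, 2), reverse=True)
print(ranking[0])  # A
```

In the hierarchy-extraction setting, nodes with a larger m-reach sit higher in the influence hierarchy, since information originating from them spreads to more of the network within a few steps.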

