Abstract
Rankings of scholarly journals based on citation data are often met with scepticism by the scientific community. Part of the scepticism is due to disparity between the common perception of journals’ prestige and their ranking based on citation counts. A more serious concern is the inappropriate use of journal rankings to evaluate the scientific influence of researchers. The paper focuses on analysis of the table of cross‐citations among a selection of statistics journals. Data are collected from the Web of Science database published by Thomson Reuters. Our results suggest that modelling the exchange of citations between journals is useful to highlight the most prestigious journals, but also that journal citation data are characterized by considerable heterogeneity, which needs to be properly summarized. Inferential conclusions require care to avoid potential overinterpretation of insignificant differences between journal ratings. Comparison with published ratings of institutions from the UK's research assessment exercise shows strong correlation at aggregate level between assessed research quality and journal citation ‘export scores’ within the discipline of statistics.
Summary
The problem of ranking scholarly journals has arisen partly as an economic matter. When the number of scientific journals started to increase, librarians were faced with decisions as to which journal subscriptions should consume their limited economic resources; a natural response was to be guided by the relative importance of different journals according to a published or otherwise agreed ranking. Gross and Gross (1927) proposed counting the citations received by journals as a direct measure of their importance. Garfield (1955) suggested that the number of citations received should be normalized by the number of citable items published by a journal.

For various economics and management-related disciplines, the Journal Quality List, compiled by Professor Anne-Wil Harzing and available at www.harzing.com/jql.htm, combines more than 20 different rankings produced by universities or evaluation agencies in various countries. Such rankings are typically based on bibliometric indices, expert surveys, or a mix of both. In some disciplines the focus is shifting towards broader measurement of research impact through web-based quantities such as citations in social-media sites, newspapers, government policy documents, blogs and so on. This is mainly implemented at the level of individual articles; see, for example, the Altmetric service (Adie and Roe, 2013) available at www.altmetric.com, but the analysis may also be made at journal level.

The citation data set and the computer code used for the analyses, written in the R language (R Core Team, 2014), are made available in the Supplementary Web Materials.
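To make Garfield's normalization concrete, the short R sketch below computes a citations-per-citable-item score for a few journals and ranks them by it. The journal names and counts are purely hypothetical, chosen only for illustration; this is not the paper's own code, which is provided in the Supplementary Web Materials.

```r
## Illustrative sketch with made-up numbers (not real citation data).
## Garfield-style normalization: citations received divided by the
## number of citable items published by the journal.
journals <- data.frame(
  journal       = c("Journal A", "Journal B", "Journal C"),
  citations     = c(480, 1250, 310),   # hypothetical citation counts
  citable_items = c(160, 250, 155)     # hypothetical numbers of published items
)

## Normalized score: average citations per published item
journals$citations_per_item <- journals$citations / journals$citable_items

## Rank journals by the normalized score, largest first
journals[order(-journals$citations_per_item), ]
```

Ranking by the raw citation counts would favour Journal B simply because it publishes more items; the normalized score removes that size effect, which is the point of Garfield's proposal.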