Abstract

Sometime this month, Thomson Reuters will release the 2012 journal impact factors (IFs). For some, this announcement will be big news, as the scientific community is increasingly locked into a love-hate relationship with this journal metric. Some researchers use it to decide which journal to submit to, readers may use it to decide which papers to read in a long list of search results, and new journals eagerly await the ranking that will help establish their reputation and credibility. In addition, despite widespread recognition that a journal IF is not a valid or appropriate metric for evaluating individual researchers, some funding and tenure committees may look to it in assessing the value of a candidate's publication record. Discussion of this type of misuse of the IF has been brewing for some time. In fact, Thomson Reuters stated as far back as 2008 that, "Perhaps the most prominent misuse of the Journal Impact Factor is its misapplication to draw conclusions about the performance of an individual researcher." We the editors at Cell Press wholeheartedly agree, and so do all of the scientists we speak to. Yet, 5 years after the index's creators themselves implored the scientific community not to use IFs to assess individual scientists for promotion, hiring, and funding decisions, the practice persists. Why is that, and what can we do as a community to effect change?

As many of you may know, last month a number of journals, publishers, scientists, and funders signed and published the San Francisco Declaration on Research Assessment (DORA), outlining again the limitations and abuses of IFs and calling for specific actions by stakeholders throughout the scientific community. Although Cell Press declined to sign DORA because it contained specific calls to action that we did not feel we could endorse as constructive and appropriate measures, we support the goals of DORA and add our voices and actions to bringing about change in how individual scientists are assessed for hiring, promotion, and tenure.

The assessment of science and scientists requires a multidimensional approach. Sadly for everyone who sits on study sections and promotion and hiring committees confronted with large numbers of applicants, there are no shortcuts. Many of those dimensions, arguably the most important ones, are impossible to "metricize": the candidate's track record of making valuable contributions to the advance of research (publications as well as reagents, techniques, and data sets), the creativity of their forward-looking research agenda, the "reach" of their work into adjacent disciplines, their ability to mentor early-career scientists and educate students, their fit within the department, their integrity, and their ability to collaborate effectively. There are no numbers for these.

So how do editors think about IFs, and what role do they play in deciding what a journal will publish? Probably to most scientists' surprise, we do not think about IFs on a daily, weekly, or even monthly basis. Of course, we would like the work that we publish to garner significant attention from the scientific community and be a cornerstone on which subsequent science builds. Typically, this means that we, like the authors, want many people to read the papers we publish, be inspired by them to consider new avenues in their own research, and therefore cite the papers in their own work.
To the degree that this attention translates into a high IF, we are pleased when our IF is high or growing. But first and foremost, our goal is to provide a fast, informed, and rigorous review process that successfully identifies findings that change the way we understand and think about biological processes, thereby creating a journal of interest and value to our readers. We publish papers that fulfill this primary objective even if we suspect that those papers may not be highly cited. By taking this approach, we believe that we add value to underrepresented or under-cited fields by maintaining the breadth of scope of our journals and by bringing work that might otherwise be considered of niche interest to the attention of a wider audience.

At a broader level, can any single journal metric, such as the IF, be a valuable or meaningful piece of data on its own? Probably not. Thomson Reuters also supplies a range of other citation-based metrics, including the immediacy index, which captures the timeliness of a journal's impact by measuring same-year citations; the citation half-life, which measures the "posterity factor" of a journal; and a 5-year impact factor, which gives a longer-term measure of the citation activity of a journal's content. In addition, alternative metrics such as the Eigenfactor, SNIP, and SCImago are all designed to provide different and more nuanced views of journal citation performance, based on algorithms that take into account a variety of factors. The relative "scores" for journals vary depending on the particular metric. Nevertheless, while very few scientists can quote the immediacy indices, Eigenfactors, or SCImago scores of journals, they often know the IFs. Why is that? How was it decided that the 2-year citation metric is the one meaningful metric that dominates all others? Perhaps it is because there is an intuitive ease about IFs, and a 2-year window "feels" right in balancing enough time for other scientists to build on the work and publish without creating too much of a delay in assessing impact. Perhaps it is because of Thomson Reuters' active and selective marketing of IFs over its other metrics. No one can really say, but it is clear that, although the expediency and objectivity of metrics can be appealing and each measure may be valid on its own, capturing the multidimensional value and quality of a journal requires a panel of measures that reflect different aspects of impact.
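For readers who want the 2-year metric made explicit, the IF being released this month is conventionally computed as a simple ratio, and the immediacy index mentioned above follows the same pattern with a same-year window. What follows is our paraphrase of the standard definitions, using 2012 as the example year and notation of our own choosing; it is not part of the Thomson Reuters announcement:

\[
\mathrm{IF}_{2012} = \frac{\text{citations received in 2012 by items published in 2010 and 2011}}{\text{number of citable items published in 2010 and 2011}}
\]

\[
\mathrm{Immediacy}_{2012} = \frac{\text{citations received in 2012 by items published in 2012}}{\text{number of citable items published in 2012}}
\]

Note that what counts as a "citable item" in the denominator (typically research articles and reviews, excluding front matter) is itself a classification decision, which is one reason the same underlying citation data can yield different scores across different metrics.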
In support of this move toward multiple measures, Cell Press will change how we inform the scientific community about our impact factors and will place them in the context of multiple metrics. Several months ago, we added article-level altmetrics to our site so that readers can track the real-time community response to individual papers. In addition, search results on our site include citation information based on Scopus data. When considering this overall array of information, it is important to keep in mind that article-level metrics and measures of social media buzz are just as subject to bias and error as journal-level metrics. Individual articles published in high-quality journals that garner relatively few citations, downloads, or Facebook Likes should not be viewed as "mistakes" or as making a limited contribution to the field. These papers met a very high editorial standard for changing the way that we think about an important biological problem and went through rigorous peer review.

Some important questions capture the attention of only a small number of researchers, perhaps because they are ideas ahead of their time or require expertise in new technologies; others spark contributions from many labs. As a result, some fields naturally have more active citation patterns than others, but that difference is not a reflection of the interest or importance of the work. We as editors know perhaps better than anyone that this year's citation sleeper can be a Nobel Prize winner 10 years down the road. We are careful to judge the papers we publish on the science, not on measures of popularity like citations and downloads, and we encourage readers, funders, and search committees to do the same. So, as the buzz over this month's IF news comes and goes, we, the editors at Cell Press, will continue focusing on the things that we know are truly important, like exciting, compelling science and talking to authors and reviewers, and we will remember that any rating is just a rating.
