Abstract

In my previous editorial, I drew attention to what I referred to as the folly of impact factors. Nowadays, journals are ranked almost exclusively by the impact factor, which is based on the number of citations to a journal's articles in the two calendar years after the year the articles are published (the conventional formula is sketched after this abstract). Many tricks are used to pimp the impact factor, some of which I discussed previously. What might be better tools to assess the value of scientific work?

Some easy solutions might already be helpful. Rather than looking at the impact factor of a journal, it is very easy to look at the citations of an individual article. And maybe it is not citations that mark relevance but the actual use of the work: the number of downloads, for instance, is a good measure of how often the work drew attention. Maybe work that is cited in a review needs to be valued differently from work that is used to build further science. But these are still just quantitative measurements that are at best indicators, not really outcome measures of scientific quality. A qualitative approach would have more value but is also very difficult and inevitably based on subjective opinion.

However, the objectivity of the numerical approach is also quite questionable. Real scientific breakthroughs do not always appear in the best journals; an old experience of mine exemplifies that. When we discovered that the t(14;18) can be found in healthy individuals, the article was rejected by several journals. To quote a reviewer at Science: "I do not know what they did wrong, but the results cannot be true. A translocation indicates cancer." Less than a year after the article had been published (in Oncogene), I presented the data in a lecture, and the audience reacted: of course a translocation is not enough to lead to cancer; it requires more. Our work had been repeated by several groups within months and was common knowledge within a year; the article was cited 37 times in the two years after publication (good for Oncogene) and has now been cited more than 250 times. Fine, but this is quite low compared to the impact it had. I guess quite a few of you have had a similar experience: it is not your best work that reaches the best journals and is recognized as such.

Can the crowd do a good job? We know that there are several areas where the knowledge of the crowd indicates value. How about looking at what, let us say, hematopathologists read and use, in addition to actual citations? Or looking at whether results are confirmed rather than refuted? I believe that in this era of "big data," such approaches can give very interesting new insights... To be continued!

J. H. van Krieken
Han.vanKrieken@radboudumc.nl
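
For context, the two-year journal impact factor referred to above is conventionally computed as follows; this is the standard definition used by citation indexes, not a formula given in the editorial itself:

\[
\mathrm{IF}_{Y} = \frac{C_{Y}(Y-1) + C_{Y}(Y-2)}{N_{Y-1} + N_{Y-2}}
\]

where $C_{Y}(y)$ is the number of citations received in year $Y$ by items the journal published in year $y$, and $N_{y}$ is the number of citable items the journal published in year $y$. On this definition an individual article contributes citations only during its first two full calendar years, which is why the Oncogene article's 37 early citations, rather than its more than 250 lifetime citations, are what counted toward the journal's impact factor.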
