EDITORIAL

How to Measure Science in Physiology

Ulrich Pohl

Published online 1 February 2008. https://doi.org/10.1152/physiol.00049.2007

“Art should never try to be popular. The public should try to make itself artistic. . . . We have been able to have fine poetry in England because the public do not read it, and consequently do not influence it.” —Oscar Wilde, “The Soul of Man Under Socialism”

What Oscar Wilde writes here about art and poetry cannot quite be the view of the editors of a scientific journal. They want their articles to be read and indeed to become “popular,” so that other colleagues working in the field become aware of the current findings and concepts published by their journal. In fact, they want to make it the one journal that people turn to first for information on the essentials of modern physiology. The society publishing this journal certainly also wishes to set the scientific standards in physiological research and, additionally, to provide an excellent service to those who teach physiology.

But how do editors and publishers measure whether the journal and each of its articles meet these goals well enough for the journal to be developed further? Of course there is the impact factor, a much debated but widely accepted measure of how well an article has been received. It assumes that the average number of people citing an article is a representative measure of its scientific quality. Indeed, lacking better indexes, the editorial board is working hard to increase it further by inviting topics and authors that are as current and as interesting as possible and that cover the many expanding areas of physiology.
A high impact factor, in turn, serves to convince the best researchers in the field to write for Physiology, since they can then be certain that their reviews will be widely disseminated and both frequently read and frequently cited. This, in turn, automatically increases the impact factor further and seems at present to be the most promising way to develop Physiology.

But is the impact factor of Physiology really telling us what we need to know? More to the point, is it really a good index for review articles? What about those scientists who took important information from an article but—as would seem natural—prefer to cite the original articles on which the review was based? And what about those scientists who use these articles to make their teaching appealing and up to the minute but do not cite them in any publicly available way? And couldn’t it be that a substantial part of the readership finds it useful to screen each issue of Physiology for interesting new developments in adjacent fields of research, an approach that does not immediately result in citations when they publish their own research?

In the days of the internet, one would expect that hits on the contents of a particular journal, on its abstracts, and, more to the point, on each of its articles would give this information more precisely. Here, the numbers for Physiology are rather high and have steadily increased over the past years. But experts tell us there is no way to differentiate between those who just curiously “walked through” without seeking or finding specific information and those who succeeded in finding an article matching their expectations and need for information. Moreover, the numbers do not reflect any evaluation by those who have read the articles, so we do not know whether “the public . . . tried . . . to make itself artistic.” Solving this problem is, however, a real must.
It would not only help the publisher and editors to develop their journal, it would also provide a more adequate index of the impact and quality (or at least their perception by others working in the field) of an individual article and its authors. Right now we have a very unsatisfactory situation, but in the long run the impact factor will not remain the only instrument for measuring the quality and successful impact of scientific information. More feedback is necessary if we want to have “fine science,” because we know the public does read it, and consequently influences it.

Volume 23, Issue 1, February 2008, Page 2. © 2008 Int. Union Physiol. Sci./Am. Physiol. Soc. PubMed: 18268359.
