Abstract

While there are multiple metrics for assessing the impact and value of authors and published work, substantial limitations remain. This editorial briefly introduces current popular metrics and proposes a need for improved methodology for identifying value in the research publishing industry.

The most familiar metric is the journal impact factor (JIF), calculated as the number of citations received in a given year by items a journal published in the two preceding years, divided by the number of citable items it published in those two years. Using this calculation, the JIF does not account for the volume of papers published, nor for the months a paper spends in press prior to the JIF calculation. Perhaps more importantly, this metric does not attribute value to the type of research, the researcher, or the journals where the research is published. In effect, this approach does not adequately incorporate the value intrinsic to authors or to productive research in their respective fields of study.

Another important metric in mainstream publishing that, while useful, nevertheless limits ascertainment of value is the h-index.[2] The h-index measures the impact of a researcher based on how often their publications have been cited. For example, if an author in 2022 has an h-index of 15, they have published at least 15 papers up to 2022, each of which has received at least 15 citations. On the surface, this seems reasonable. However, there are concerns. First, in the above example, the h-index would credit the author for only 225 career citations (i.e., 15 papers x 15 citations per paper = 225). But what if that same author had 1,500 career citations? The h-index would then credit the author for only 15% (225/1,500) of their citation record; a brief worked sketch appears below. Second, the h-index reduces author value to a single number, one that is often used by research publishing professionals in their commissioning strategy. We assert that author value is multidimensional; because the h-index is a unidimensional metric, its limitations need to be fully recognized.

Beyond the h-index, there are other popular but limited approaches to assigning value to papers and authors. These include a bias toward the total number of publications, based on the assumption that publishing volume equates to value (more volume = more value). Indeed, publishers and their journals tend to bias their content-acquisition strategies toward this value archetype, placing less value on early-stage investigators doing phenomenal research, often in nascent fields that are likely to become highly relevant, and possibly disruptive. This trend toward disincentivizing disruptive value has large-scale effects on research.[3] Under the current model, publishers are inclined to pursue higher-profile authors with many citations and publications.

By recognizing the limitations of popular metrics, research publishing has an opportunity to change its collective thinking and traditional methodologies and, in doing so, open the door to recognizing broader value in authors and papers. These are just some of the considerations that the publishing industry and readership should weigh when assessing the true value of authors, papers, and research. How, then, to move forward in aligning value with impact, and are there existing methods for doing so? Part of the solution might be to create an index that measures performance: more specifically, a weighted, multi-variable index designed to assess an author's output in the context of citations and the percentage of over- (or under-) performance over an appropriate timeframe.
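To make the h-index concern above concrete, the following minimal sketch (in Python, using a purely hypothetical citation record that mirrors the 15-paper, 1,500-citation example) computes an author's h-index and the share of career citations the metric actually credits:

    # Minimal sketch with hypothetical data: compute an h-index and the share of
    # an author's total citations that the h-index effectively credits.
    def h_index(citations):
        # Largest h such that h papers each have at least h citations.
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, count in enumerate(ranked, start=1):
            if count >= rank:
                h = rank
            else:
                break
        return h

    # Hypothetical career: 15 papers, 1,500 citations in total.
    citations = [394, 300, 200, 150, 100, 80, 60, 50, 40, 30, 25, 22, 18, 16, 15]
    h = h_index(citations)               # 15
    credited = h * h                     # 225 citations "seen" by the metric
    share = credited / sum(citations)    # 0.15, i.e., 15% of the career total
    print(h, credited, round(share, 2))  # 15 225 0.15

In this illustration, 85% of the author's citations have no effect on the h-index, which is exactly the limitation described above.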
Such a weighted performance index would add value beyond existing popular methods because it would not penalize early-stage investigators and, by contrast, might capture early indicators of positive inflection, i.e., disruptive value. Perhaps the time has come to recognize that the biggest, most-established authors may at times, but not always, reflect disruptive value in relation to the next generation of research questions. A new method might identify emerging sub-areas of research, and the authors driving that research, beyond what current methods achieve. For now, at the very least, the development of methods for evaluating research publishing would benefit from incorporating past, present, and future author and paper performance, including in new disciplines. Current methods for evaluating ‘impact’ and ‘value’ of authors and research, as stated above, are by design limited. New methods are sorely needed that account for the performance of authors in a myriad of contexts and that include relevant quantitative variables capturing ‘value’ and ‘impact’, so as to optimize effective communication of progress in scientific research. A sketch of what such an index could look like follows.
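A minimal sketch of what such a weighted, multi-variable index could look like is given below. It is not a definitive implementation: the variables (papers, citations, field_expected_citations, years_active), the weights, and the log scaling of citation volume are illustrative assumptions, standing in for whatever field baselines and timeframe a publisher might actually choose.

    import math
    from dataclasses import dataclass

    @dataclass
    class AuthorRecord:
        papers: int                      # papers published in the chosen timeframe
        citations: int                   # citations received in the same timeframe
        field_expected_citations: float  # citations expected of comparable output in the field
        years_active: float              # career length, so early-stage authors are not penalized

    def performance_index(a, w_output=0.2, w_citations=0.3, w_over=0.5):
        # Weighted blend of output, citation volume (log-scaled so sheer volume does not
        # dominate), and percentage of over/under-performance relative to the field.
        output_term = a.papers / max(a.years_active, 1.0)
        citation_term = math.log1p(a.citations / max(a.years_active, 1.0))
        over_term = (a.citations - a.field_expected_citations) / max(a.field_expected_citations, 1.0)
        return w_output * output_term + w_citations * citation_term + w_over * over_term

    # Hypothetical comparison: an early-stage investigator whose few papers strongly
    # over-perform field expectations versus an established author who merely matches them.
    early = AuthorRecord(papers=6, citations=300, field_expected_citations=90, years_active=3)
    established = AuthorRecord(papers=120, citations=4000, field_expected_citations=4200, years_active=25)
    print(round(performance_index(early), 2), round(performance_index(established), 2))  # ~2.95 vs ~2.46

With these illustrative weights, the early-stage investigator scores higher, the kind of early positive inflection that the metrics discussed above tend to miss; the point is the multidimensional shape of the calculation, not the particular numbers.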
