Abstract

Arguing against the Proposition is Ruimin Ma, M.S. Mr. Ma obtained his B.S. from Shanxi University in 2000 and his M.S. from Wuhan University, Wuhan, People's Republic of China, in 2004. He is currently completing research for his Ph.D. in Informetrics at Wuhan University, where he is an assistant researcher in the Research Centre for Chinese Scientific Evaluation. His major research interest is scientometrics, especially domain analysis, visualization, and evaluation of research competitiveness. He has participated as one of the research leaders in several evaluation projects for Chinese and other universities and Chinese academic journals, and has prepared consultant reports commissioned by the Ministry of Education of the People's Republic of China and several universities. He has published over 20 papers on research evaluation in both international and domestic journals. The h index was devised by Hirsch1 as a research performance indicator intended to be an improved measure of the impact and quality of the work of an individual researcher. The h index is that value h where h of an author's papers have at least h citations each and the remaining papers have no more than h citations each. Researchers with an h index of 30 have, when their papers are ordered by the number of citations received from highest to lowest, a 30th paper that has been cited at least 30 times, so that papers 1–30 each have 30 or more citations. The h index reflects both the number of publications and the number of citations per publication. It is designed to improve upon simpler measures such as the total number of citations or publications. The h index works properly only for comparing scientists from the same discipline because citation conventions differ between disciplines.
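The definition above translates directly into a short computation. The following Python sketch (an illustration added here, not part of either author's argument, using an invented citation list) ranks a researcher's papers by citation count and finds the largest rank h at which the h-th paper still has at least h citations:

```python
def h_index(citations):
    """Return the h index: the largest h such that h papers
    have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited paper first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the rank-th paper still has >= rank citations
        else:
            break     # citations only decrease from here on
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with at least 4 citations each
```

A researcher whose 30th-ranked paper has 30 or more citations, as in the example in the text, would obtain h >= 30 from this procedure.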
It has rapidly become an alternative to the more traditional metric of the journal impact factor in the evaluation of the impact of the work of a particular researcher.2 Numerous papers published in the literature explore the role of the h index further,3–5 including in the discipline of medical physics.6 As only the most highly cited articles contribute to the h index, its determination is a relatively simple process. The h index can be determined using databases such as Web of Science®, Scopus®, or Google Scholar®. Different databases used to calculate the h index for the same researcher, however, often produce slightly different results, with Google Scholar® reporting more citations than Scopus® and Web of Science®, the latter citation collections tending to be more accurate than the former.6 Hirsch calculated the highest h value among physicists to be that of Witten from the Princeton Institute for Advanced Study, for whom it is 110, with Hawking having an h index of 62.1 Hirsch subsequently demonstrated that the h index is highly predictive as to whether a scientist will be elected to a fellowship of a national academy or even awarded a Nobel Prize.7 For physicists, Hirsch suggested that a value for h of about 10–12 might be a useful guideline for making a decision regarding tenure at major research universities. A value of 18 might be a useful guideline for a full professorship, 15–20 for fellowship in the American Physical Society, and 45 or higher for membership in the National Academy of Sciences. As a numerical value it can be used to quantify both the productivity and impact of a scientific researcher, and it is becoming the metric of choice in academic circles for assessment when considering grant allocation and making offers of employment, tenure, promotion, and fellowship in learned societies.8 The h index, which measures the quality and quantity of an author's research papers,1 has become an accepted indicator for the evaluation of the research productivity of scientists.
However, because of several drawbacks, it is not the best measurement, as I will demonstrate. First, ingenious though the h index is, it neglects the dynamic nature of citations. The h index is significantly influenced by highly cited papers which, once counted, have no influence whatsoever on the computation of the author's h index in subsequent years, no matter how much their citations increase.9 The A, R, and AR indices9,10 have been developed to correct this deficiency. In addition, the h index has been shown to be easily influenced by subterfuges such as excessive self-citation. Second, the relevance of the h index varies considerably between specialties. It is more logical to apply it to the measurement of research productivity in the natural sciences than in the humanities and social sciences, whose researchers publish far more in monographs than in journals; monographs are not included in h-index computations. However, even within the natural sciences, citing patterns vary considerably between subjects. For example, the average h index for the top ten scientists is 147.1 in the life sciences but only 63.7 in computer science.11 Several methods have had to be developed to address this problem of noncomparability.12–14 Third, the h index is highly dependent on the length of time scientists have devoted to research work. The h index is biased toward older scientists because younger scientists have had less time to generate sufficient numbers of papers and subsequent citations. It is possible for a scholar who has hardly published any papers for several years to retain a high h index. Therefore, the h index puts newcomers at a disadvantage, potentially hampering the recognition of young but excellent scientists.13 Fourth, the h index overvalues the quantity of papers published and undervalues their quality.
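The first drawback, the insensitivity of the h index to further citation growth of the papers already in its core, can be made concrete with a small sketch (the citation counts below are invented for illustration):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    return max([0] + [rank for rank, c in enumerate(ranked, 1) if c >= rank])

# Five papers with comfortably more than 5 citations each give h = 5.
before = [60, 50, 40, 30, 20]
# The four most-cited papers then gain hundreds of citations each...
after = [600, 500, 400, 300, 20]
print(h_index(before), h_index(after))  # 5 5 -- the h index does not move
```

Once a paper clears the h threshold, any further citations it receives are invisible to the index, which is exactly the deficiency the correction indices cited above set out to repair.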
A researcher with a large number of poorly cited papers can have a higher h index than one with half as many very highly cited papers,15 which means it is impossible for a scientist with a limited number of excellent papers to obtain a high h index.16 Lastly, the h index alone cannot substitute for indicators from traditional bibliometrics. After so many years of application, the combined use of bibliometric indicators, such as the number of papers, citations, average citations, and impact factor, is recommended rather than a single h index.15 The h index alone relies too heavily on the number of papers and total citations.15 It is significant that the standard adopted by the Institute for Scientific Information (ISI) and the Essential Science Indicators (ESI) databases for selection of the most highly cited and “hot” papers considers both the time period of publication and the subject field of a paper and reduces the impact of the total number of citations, in contrast to the h index. In summary, it is a complicated process to evaluate a scientist's research productivity. The h index should be considered as a supplement to traditional bibliometric indicators, but definitely not an omnipotent indicator. The h index is neither perfect nor the best. The introduction by Hirsch of the h index generated much interest in the academic community and discussion in the literature as to whether the h index is the “best” metric to measure how “good” a scientist is.17 In this context, and in order to investigate whether there are better comparative indices than the h index, a study was undertaken of nine different Hirsch-type variants of the h index that had been suggested in the literature.18 These variants were the m quotient, g index, h(2) index, a index, m index, r index, ar index, and hw index.
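The quantity-over-quality effect described at the start of this passage can be checked numerically; the two citation profiles below are hypothetical:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    return max([0] + [rank for rank, c in enumerate(ranked, 1) if c >= rank])

many_modest = [12] * 12                        # twelve papers, 12 citations each
few_superb = [500, 450, 400, 350, 300, 250]    # six papers, very highly cited
print(h_index(many_modest))  # 12
print(h_index(few_superb))   # 6 -- capped by the number of papers
```

The second researcher has over fifteen times the total citations of the first, yet half the h index: the index can never exceed the number of papers published, however influential those papers are.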
The authors found from factor analysis that there are two types of indices: Some describe the most productive “core” of a scientist's output and give the number of papers in that core, and the others describe the impact of the papers in the core. The use of pairs of indices was suggested as a meaningful indicator for comparing scientists, where one index relates to the number of papers in the researcher's productive core (such as the h or g index) and the other relates to the impact of the papers in the researcher's productive core (such as the a or r index). It is noted, however, that a number of the proposed Hirsch-type indices are actually derived variants of the h index, thereby perhaps reinforcing the premise that the fundamental metric, the h index, or a variant of it, is the best “measure” of a scientist's research productivity. It will perhaps be only after more studies have been published in this area, particularly by those working in the information sciences and researching evaluative bibliometrics, that the true value of the h index will become evident. However, since the h index is automatically calculated in the “citation report” function of Web of Science®, it is likely to be used by researchers in the foreseeable future as it does indicate the broad impact of a scientist's cumulative research contributions. Indeed, the h index is quite remarkable for its ingenuity and ease of use, and it does measure the quantity and quality of the papers of an author. However, these advantages do not guarantee that it is the best measurement. Dr. Baldock, in citing Ball's paper,2 stated that the h index has become an alternative to the impact factor, which is ambiguous. The impact factor evaluates the quality of journals and is not capable of directly measuring the impact of the work of any specific researcher. It is possible that the h index might eventually gain stature equal to that of the impact factor, but they have completely different functions. Also, Dr.
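Of the variants mentioned, Egghe's g index illustrates how a companion index can reward the impact of the productive core: it is the largest g such that the g most-cited papers together have at least g² citations. A minimal sketch, using the same invented citation list as earlier:

```python
from itertools import accumulate

def g_index(citations):
    """Egghe's g index: largest g such that the g most-cited papers
    together have at least g**2 citations."""
    ranked = sorted(citations, reverse=True)
    g = 0
    for rank, running_total in enumerate(accumulate(ranked), start=1):
        if running_total >= rank * rank:
            g = rank
    return g

print(g_index([10, 8, 5, 4, 3]))  # 5 (cumulative total 30 >= 25)
```

For this list the h index is 4 but the g index is 5, because the cumulative citations of the top papers are allowed to compensate for the lower-ranked ones, which is precisely the kind of core-impact information the paired-index approach captures.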
Baldock, by citing Hirsch,1 demonstrated that the h index may be a useful guideline to classify researchers and to make decisions on promotion, fellowship, and so on. However, I have to point out that some drawbacks exist in the h index's algorithm. For example, author A has published six papers which have, respectively, 20, 15, 9, 7, 6, and 4 citations. In contrast, author B has also published six papers but with citations of 100, 25, 8, 7, 6, and 4, respectively. Obviously, both authors have the same h value (viz., 5). The total citations of author B are, however, much higher than those of author A, which means that the h index has ignored some significant aspects of the outputs of these researchers. It is not sufficient to apply the h index, therefore, without combining it with traditional metric indicators. In addition, Dr. Baldock considered that the h index should only be used to compare the outputs of researchers within specific disciplines; this restriction is itself a major drawback. Some suggestions have been made to address this deficiency.12–14 In summary, whether the h index should be widely used to measure the outputs of researchers has still to be adequately tested. It is certainly an interesting and attractive index, but definitely not the best indicator of the impact of a researcher's work.
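The author A versus author B comparison can be verified directly; the sketch below uses the citation counts given in the text:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    return max([0] + [rank for rank, c in enumerate(ranked, 1) if c >= rank])

author_a = [20, 15, 9, 7, 6, 4]
author_b = [100, 25, 8, 7, 6, 4]
print(h_index(author_a), sum(author_a))  # 5 61
print(h_index(author_b), sum(author_b))  # 5 150
```

Both authors score h = 5, yet author B has accumulated 150 citations against author A's 61: the extra 80 citations on B's top paper are simply discarded by the index.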
