Abstract
The objective of this study was to analyze bibliometric data from ISI, National Institutes of Health (NIH) funding data, and faculty size information for Association of American Medical Colleges (AAMC) member schools from 1997 to 2007 to assess research productivity and impact. This study gathered and synthesized 10 metrics for almost all AAMC medical schools (n=123): (1) total number of published articles per medical school, (2) total number of citations to published articles per medical school, (3) average number of citations per article, (4) institutional impact indices, (5) institutional percentages of articles with zero citations, (6) annual average number of faculty per medical school, (7) total amount of NIH funding per medical school, (8) average amount of NIH grant money awarded per faculty member, (9) average number of articles per faculty member, and (10) average number of citations per faculty member. Using principal components analysis, the author examined the relationships among these measures. The analysis revealed 3 major clusters of variables that together accounted for 91% of the total variance: (1) institutional research productivity, (2) research influence or impact, and (3) individual faculty research productivity. The variables in each cluster suggest ways to evaluate medical school research in a more nuanced manner. Significant correlations exist between the extracted factors, indicating that all variables are interrelated. Total NIH funding may relate more strongly to the quality of the research than to its quantity. Eliminating medical schools that were outliers on 1 or more indicators (n=20) altered the analysis considerably. Though popular, ordinal rankings cannot adequately describe the multidimensional nature of a medical school's research productivity and impact. This study provides statistics that can be used in conjunction with other sound methodologies to provide a more authentic view of a medical school's research. The large variance of the collected data suggests that refining bibliometric data by discipline, peer group, or journal information may provide a more precise assessment.
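To make the method concrete, the sketch below runs a principal components analysis over 10 columns standing in for the metrics listed above. This is not the author's actual analysis: the column names, the synthetic data, and the use of scikit-learn are illustrative assumptions; only the counts (123 schools, 3 retained components) come from the study.

```python
# Minimal sketch of a principal components analysis over 10 bibliometric
# metrics; data and column names are hypothetical stand-ins.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
metrics = [
    "articles", "citations", "citations_per_article", "impact_index",
    "pct_zero_citations", "faculty", "nih_funding",
    "nih_per_faculty", "articles_per_faculty", "citations_per_faculty",
]
# Hypothetical data: one row per medical school (the study used n = 123).
X = pd.DataFrame(rng.lognormal(size=(123, len(metrics))), columns=metrics)

# Standardize so dollar-denominated and count-denominated metrics are comparable.
X_std = StandardScaler().fit_transform(X)

pca = PCA(n_components=3)  # the study retained 3 major clusters of variables
scores = pca.fit_transform(X_std)
print("variance explained:", pca.explained_variance_ratio_.sum())

# Loadings show which metrics cluster together on each component.
loadings = pd.DataFrame(pca.components_.T, index=metrics,
                        columns=["PC1", "PC2", "PC3"])
print(loadings.round(2))
```

On real data, the metrics loading heavily on each component would identify the clusters the study labels institutional productivity, research impact, and individual faculty productivity.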
Highlights
Bibliometric statistics are used by institutions of higher education to evaluate the research quality and productivity of their faculty
Tenure, promotion, and reappointment decisions are considerably influenced by bibliometric indicators, such as gross totals of publications and citations and journal impact factors [1,2,3,4,5,6]
Due to the important organizational and personnel decisions made from these analyses, these statistics and the concomitant rankings elicit controversy
Summary
Bibliometric statistics are used by institutions of higher education to evaluate the research quality and productivity of their faculty. Tenure, promotion, and reappointment decisions are considerably influenced by bibliometric indicators, such as gross totals of publications and citations and journal impact factors [1,2,3,4,5,6]. Many scholars denounce the use of ISI's impact factor and immediacy index, as well as citation counts, in assessing a study's quality and influence. Major criticisms of reliance on bibliometric indicators include manipulation of impact factors by publishers, individual self-citations [15], and the uniqueness of disciplinary citation patterns [15, 16]. The full versions of Tables 1 and 2 are available with the online version of this journal.
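For reference, the two-year journal impact factor criticized above is the number of citations received in a given year to a journal's items from the previous two years, divided by the number of citable items the journal published in those two years. A minimal sketch, with made-up figures:

```python
# Two-year journal impact factor: citations in year Y to items published in
# Y-1 and Y-2, divided by the citable items published in Y-1 and Y-2.
# The numbers below are hypothetical, for illustration only.
def impact_factor(citations_prev_two_years: int, citable_items_prev_two_years: int) -> float:
    return citations_prev_two_years / citable_items_prev_two_years

print(impact_factor(1500, 400))  # -> 3.75
```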