We in academic roles spend a great deal of time evaluating the publication records of other scholars who are candidates for positions, promotions, tenure, funding, or awards. We may take note of the total or annual number of publications; the number and characteristics of those that have appeared since a milestone such as completion of the PhD or the first academic appointment; the quality or reputation of the journals in which the publications appeared; whether the candidate was the first author or a co-author; or the influence of the publications on subsequent developments in the field. My goal is to encourage evaluators to become more focused and thoughtful in judging publication expectations at various stages of a career and in various contexts.

Many judgments fall into two categories: productivity and impact. Impact can be defined loosely as the influence of an article, a journal, or an author's or team's body of publications on scholarly work in the field. Impact is the focus of much discussion these days, and many metrics have been developed to capture it. All of the impact metrics I have seen are based on citations of journal articles in other journal articles. Scholarly journals are rated and ranked annually on the basis of impact factors. The impact factor of a journal is most commonly calculated as the average number of citations received in the past year by articles that journal published in the two preceding years. If a journal published a total of 100 articles in 2013 and 2014, and those articles were cited 150 times in 2015, the journal's 2-year impact factor for 2015 would be 1.5, reflecting an average of 1.5 citations per article. This number reflects only very recent citations of fairly recent articles, so groundbreaking articles that appeared in the journal four or more years earlier have no effect. Five-year impact factors are more stable over time.
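The arithmetic behind the 2-year impact factor can be sketched in a few lines of code. This is a minimal illustration of the calculation described above, using the hypothetical figures from the text rather than data for any real journal:

```python
def two_year_impact_factor(citations_in_census_year: int,
                           articles_in_prior_two_years: int) -> float:
    """Average citations received this year per article the journal
    published in the two preceding years."""
    return citations_in_census_year / articles_in_prior_two_years

# A journal that published 100 articles across 2013-2014,
# with those articles cited 150 times during 2015:
print(two_year_impact_factor(150, 100))  # -> 1.5
```

Because only a two-year window of articles and a one-year window of citations enter the ratio, a single heavily cited article or a change in publication volume can swing the result noticeably from year to year.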
Determining appropriate approaches to assessment of impact is a high priority in nations with limited research resources, such as Australia, New Zealand, and the UK, where governments are seeking to invest their scarce funds in projects with the greatest potential to benefit the public (McGilvray, 2014; Morgan, 2014). In general, these national efforts combine publication-based impact metrics, researcher narratives on their planned pathways to public impact, and peer review. Some of these systems rate departments or universities, and others rate individual scholars. Early scholars and interdisciplinary researchers are not always fairly represented in these calculations (McGilvray, 2014), and attempts to accommodate these and other categories have made the scoring and decision-making processes exceedingly complex. Given the high stakes of these evaluations for individual scholars' and departments' continued funding or even employment, the international controversy surrounding these systems is understandable.

I would argue that journal impact factors have limited validity for those publishing in nursing journals, because the two-year impact factors of nursing journals are quite labile, and their rankings by impact factor change considerably from year to year. Perhaps more meaningful in a field such as nursing are the citation records of individual scholars. A popular measure is the h-index (Hirsch, 2005), defined as the largest number h such that the author has h articles that have each been cited at least h times. For example, an h-index of 8 means that 8 of a given author's articles have been cited 8 or more times. You can find your own h-index by searching for your name as author in databases such as Web of Science, Scopus, or Google Scholar, adding to or editing the list of publications that appears, and requesting the h-index or citation report. Caveats are called for here, both for journal impact factors and for individual citation metrics.
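The h-index definition above translates directly into a short computation. The sketch below uses hypothetical citation counts for ten articles; the standard approach is to sort the counts in descending order and find the last rank at which the count still matches or exceeds the rank:

```python
def h_index(citation_counts: list[int]) -> int:
    """Largest h such that the author has h papers with >= h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, citations in enumerate(counts, start=1):
        if citations >= rank:
            h = rank   # at least `rank` papers have `rank` or more citations
        else:
            break
    return h

# Hypothetical citation counts for one author's ten articles:
print(h_index([30, 22, 15, 12, 10, 9, 8, 8, 4, 1]))  # -> 8
```

Note that the index is insensitive to how heavily the top papers are cited: replacing the 30 above with 300 leaves the h-index at 8, which is one reason the variant indices mentioned below have been proposed.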
Each database indexes different journals and therefore locates different numbers of articles and citations (De Groote & Raszewski, 2012), and in the case of Web of Science, only articles that appeared while the host institution was a Web of Science subscriber are included in the calculation. De Groote and Raszewski used an aggregated h-index to achieve a more comprehensive measure for scholars in the nursing field. A number of other indices have been developed in an attempt to improve on the h-index by taking the actual number of citations and years of scholarship into account (Conn & Chan, 2015), but one must still rely on the extent to which available databases encompass a given scholar's body of work and the journals in which it has been cited.

Furthermore, citation counts are imperfect indicators of real influence. A citation indicates only that another author (or even the original author) referred in a subsequent publication to the work of the author being evaluated. Self-citations should be excluded from citation counts if possible. A citation count does not reflect whether the citing author praised the original work, judged it important, built on it, or dismissed it as flawed or of little significance. Too often, in my experience, subsequent authors have misinterpreted the original work or used it inappropriately.

It is important to remember that a scholar's impact on the field may not be reflected in citations alone. A researcher's work might inspire a far-reaching clinical practice change that is never reported in the scholarly press. A scholar might use his or her accumulated wisdom to lead a policy team that influences how health care is delivered. Many academics make a lifelong difference in the work of their students and colleagues by sharing their insights on an area of scholarship or providing access to or mentoring in its methods.
These types of impact usually must be captured in testimonials from those who gained by contact with the scholar and in peer review of the work as a whole by experts in the field.

Despite these caveats, patterns of citation of individual authors' work can provide thought-provoking data on impact. When looking up citations of work by a group of nursing faculty, I found that although most of the research faculty had both strong publication records and stellar citation counts, one or two had numerous publications that had never been cited. By contrast, a faculty member in a clinical career pathway had published only a few articles, but these reports of clinical innovations had hundreds of citations. Perhaps this clinical scholar had a more immediate impact on health care delivery than did the senior scholar. For research-focused faculty, while we look for a substantial list of heavily cited publications as evidence of a presence in the field, we should also remember that a few groundbreaking articles can change the direction of research in a topic area. Citations can reflect this kind of impact.

In some academic evaluations, productivity is the more important indicator. Productivity can be defined as the scholar's rate of production of the desired type and quality of published scholarship. Counting publications per year, or since a milestone such as hiring or promotion, is probably the most common way of reporting on productivity. Looking beyond publications, counting the number of grants or dollars of funding in a given period is another. In some fields, conference presentations, media coverage, and other activities are counted as well. One good reason for looking at productivity is that past productivity is one of the few available indicators of future productivity. This is most important for early scholars, such as candidates for a first position or a first promotion.
The ability to write and publish at a certain rate despite increasing obligations of teaching and service will be necessary for their future scholarly success. Productivity also indicates that the scholar's work has been of sufficient merit to be accepted for publication. An accumulation of work also reveals the scholar's specialization and potential contribution beyond what could be known from only one or two articles.

Even for early scholars, however, productivity as measured by publications per year is not a perfect indicator. Say, for example, an assistant professor has published 15 articles, an average of three per year since being hired, and is now at the 5-year mark and applying for promotion to associate professor with tenure. If these 15 publications are works of which the early scholar was the lead author (typically defined in nursing as first author) and demonstrate progressive development of evidence in a clearly defined and important area, this may be an excellent level of productivity in a school focused on research.

But what if the 15 articles are co-authored works, because the candidate was a minor player on a prolific team, and the candidate's own contribution was modest or cannot be determined? While this probably would not be sufficient evidence of scholarly independence and the ability to be productive in a research-intensive school, it might be praiseworthy in a practice-focused school in which independent scholarship is not the main goal. What if most of the 15 articles are first-authored, but they cover a range of topics, are not research-focused, or were not peer-reviewed or published in respected journals? Such a record might fail to demonstrate writing ability, progression of scholarship in a topic area, or potential for contribution to a targeted body of knowledge. What if all are from smaller projects for which the researcher has failed to obtain funding?
In some contexts, this would be immaterial, while in others funding is viewed as an essential mark of merit and importance. What if all 15 are reiterations of an aspect of the candidate's dissertation or postdoctoral work, and all were directed or co-authored by the original mentor? In many schools, despite general interest in team science, individual scholars need to demonstrate the ability to take the lead in independently conceiving, obtaining funding for, and publishing a scholarly project. Some committees categorize publications based on whether they are peer-reviewed, data-based, and/or research-focused. A research-intensive school would want all or most publications to meet all three criteria, while a practice- or education-focused school might expect at least some publications to be targeted specifically at clinicians or educators. Some evaluators prioritize contributions to nursing journals, and others prefer a variety of disciplinary venues. The raw number of publications cannot be judged independently of other characteristics of the scholar's body of work and the standards, written or unwritten, against which it is being evaluated.

Productivity and impact of published work are by no means the only characteristics of a scholar's track record reviewed for hiring, promotion, tenure, or grant awards, but in my experience they are common determinants of individuals' success or failure in these evaluation processes. Neither productivity nor impact is a gold standard. Each needs to be evaluated separately and weighted according to context. In general, I think productivity should be weighted more heavily early in a career, and impact later, but as the examples above show, each situation demands its own calculus of the characteristics that indicate merit.
Scholars should be ready to contest any use of single calculations of productivity or impact as stand-alone indicators of adequacy or excellence in their fields, because the available databases from which these calculations are derived are imperfect and limited in scope. But that does not mean that all judgments of scholarly excellence are equally valid or equally unimportant. We need active dialogue leading to an articulation of the minimal requirements for success, in terms of both productivity and impact, in each of our scholarly contexts. As conditions change, rethinking of expectations is warranted. In times of scarce federal funding, for example, should a large NIH R-level research project grant be a minimal requirement for tenure in all research-focused schools of nursing, or do schools do themselves a disservice if they lose faculty on this basis? On the other hand, if a school aims to become more prominent in research, should some type of external funding or a certain number of completed and published research projects become a promotion requirement, and should teaching assignments of faculty with research potential be cut back to make such expectations realistic?

Those of us who are senior scholars carry many unspoken beliefs about how to recognize strength or weakness in an individual's publication record. Appointment, promotion, and tenure committees should discuss and amend on an annual basis the metrics and other indicators used to judge publication productivity and impact, and make these expectations public, both to rising scholars and to external reviewers of promotion candidates. These are very difficult conversations, but they are essential to allocating resources to faculty who will produce trustworthy evidence in the field while respecting and retaining the faculty best prepared to educate a future workforce of competent clinicians.
These comments may provide some language for a dialogue on our expectations for scholarly productivity and impact.
