Abstract

The nature of the scientific enterprise requires that we, as professionals in the discipline of political science, do all we can to improve the quality of our collective endeavors. This means we must give serious attention to practical concerns about how to evaluate our departments, the scholars in the field, and our individual research. As Christenson and Sigelman (1985, 964) note, not all ideas win equal acceptance, and neither do all the scholars who generate those ideas or all the institutions that house those scholars. Giles, Mizell, and Patterson (1989, 613) state that publication in refereed journals is taken as a sine qua non for success in the discipline. It is generally accepted that tenure and promotion decisions, as well as merit salary increases, are heavily influenced by the quantity and quality of articles published in social science journals (Kawar 1983; Giles, Mizell, and Patterson 1989). In addition, many library professionals are interested in the accreditation of knowledge for practical reasons: they assume that the quality of a journal affects user demand (Christenson and Sigelman 1985). This recognition has stimulated recent attempts to rank political science journals to assist decision makers in evaluating faculty (see, for example, Giles, Mizell, and Patterson 1989) and, presumably, to assist library personnel in journal selection. In the first generation of research on this issue, two approaches have been used: the reputational approach developed by Giles (1975; 1989) and his colleagues, and the impact approach used by Christenson and Sigelman (1985). The reputational approach surveys a representative sample of political scientists and asks them to evaluate selected journals; respondents are typically asked to rate each journal on the quality of its articles on a scale from 0 (poor) to 10 (outstanding). The impact approach ranks political science journals by the number of citations to articles published in a particular year divided by the total number of articles published, that is, the ratio of citations to citable items for a given journal (see Christenson and Sigelman 1985). Whether such attempts assist departmental chairpersons, faculty, and librarians depends, in part, on the credibility of the indices. While both approaches have been useful in the initial evaluation of a journal's status, they have limitations: they rest either on soft data of rather limited utility or on criteria too restricted to capture a journal's total significance.
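To make the impact measure concrete, the following minimal Python sketch (an illustration added here, not part of the original study) ranks journals by the ratio of citations to citable items described above; the journal names and counts are hypothetical placeholders.

```python
# Minimal sketch of the "impact" ranking described above: each journal is
# scored by citations to its articles from a given year divided by the
# number of citable articles it published that year, then ranked.
# Journal names and counts below are hypothetical placeholders.

journals = {
    "Journal A": {"citations": 420, "citable_items": 60},
    "Journal B": {"citations": 150, "citable_items": 45},
    "Journal C": {"citations": 300, "citable_items": 120},
}

def impact_ratio(citations: int, citable_items: int) -> float:
    """Ratio of citations to citable items for a single journal."""
    return citations / citable_items if citable_items else 0.0

# Rank journals from highest to lowest citation ratio.
ranking = sorted(
    journals.items(),
    key=lambda item: impact_ratio(item[1]["citations"], item[1]["citable_items"]),
    reverse=True,
)

for name, counts in ranking:
    ratio = impact_ratio(counts["citations"], counts["citable_items"])
    print(f"{name}: {ratio:.2f}")
```

By contrast, the reputational approach would replace the citation counts with mean survey ratings of journal quality on the 0 to 10 scale described above.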
