Abstract
Student evaluation of college professors is a phenomenon that dates back to the Middle Ages. At the University of Padua in Italy, for example, the students hired their own professors (Werdell, 1967). But tradition in American education has not provided much in the way of student evaluation of professors. Student evaluation in the United States began at Harvard in 1924, when students published a Confidential Guide to Courses (Eble, 1970), which gave a review of students' ratings of courses, professors, examinations, etc. This publication is still doing a thriving business. At about the same time, the University of Washington began an evaluation program, and in 1954 the University of Michigan launched a professor evaluation program (Slobin and Nichols, 1969).

The development of scales for student evaluation of instruction began in a systematic manner about 20 years ago with the Purdue Rating Scale (Remmers and Elliott, 1950). This is a graphic ten-point rating scale consisting of ten qualities of a teacher. The scale can be used to develop profiles for each faculty member against norms that have been developed. Factor-analytic studies of instructional rating scales (Isaacson, McKeachie, Milholland, et al., 1963, 1964; Cosgrove, 1959; Coffman, 1954; Hoffman, 1963) generally found from three to seven consistent factors, with the following five factors common to most rating scales: (1) ability in presentation, (2) stability and fairness in work load, (3) organization, (4) positive response and feedback, and (5) enthusiastic, friendly, and constructive manner.

Whether students can be "experts" in evaluating instruction has long been in contention among faculty and students (McKeachie, 1969a). There are many arguments, most of which are not supported by evidence, that the student does not recognize effective and good instruction. Some statements offered to support this argument are that students "cannot really evaluate a teacher until they have left college and obtained some perspective on what was really valuable to them" and that they "rate teachers on their personalities, not on how much they've learned" (McKeachie, 1969b, p. 214). Thus, a curious and rather enigmatic teacher/student status conflict seems to have developed.

Further frequent objections to student evaluations of teaching, and the corresponding lack of evidence for them, have been stated by Slobin and Nichols (1969) as follows: (1) Student ratings are influenced by variables irrelevant to teaching. But studies show that such factors as age of student, sex of student and instructor, students' grades, etc., are not correlated with student ratings of instruction. (2) Student ratings reflect only the instructor's personality. This may be true if the rating forms are poorly constructed, but it is possible to construct questionnaires which do indeed tap areas other than personality. (3) Students cannot evaluate the goals of teaching. Students are not being asked to set the goals, but are asked to evaluate how well the teacher is achieving his goals. (4) A man should be judged by his peers. Student evaluations do not violate this; peers are not expected to be the best judges of the goals of teaching. (5) Overemphasis on teaching has bad consequences. This could be an objection only if good teaching is not essential. Slobin and Nichols (1969) quoted E. R. Guthrie as saying, "It is well to remember that student evaluation is continuous and inescapable. The only question is whether or not we care to know what it is."
Many of the questions about student evaluations have been partially answered, if not resolved. Reliability of student ratings has been shown to be high if there are 25 raters or more (Shock, Kelly, and Remmers, 1927). Student grades, past, present, or expected, are generally unrelated to evaluation of instructors (Remmers, 1930; Elliott, 1950; Voeks and French, 1960; Garverick and Carter, 1962). While the student may not be competent to judge the instructor's knowledge and mastery of concepts in his field, it appears that the student should be able to judge how well the instructor gets his subject across to the student. Solomon (1964) observed that the student is able to assess classroom performance reasonably well. Thus a basis of reliability in student rating of instructors appears to have been established to a moderate degree. Further reliability questions about the stability of instructor ratings remain; e.g., what is the consistency of an instructor's rating on teaching a given course when measured twice in the same semester (same students) compared with his rating on the same course the following semester (different group of students)? And what are the effects on rating stability after the instructor has been given the results of prior ratings?

The purpose of this study was twofold: first, to determine whether students' ratings of instructors, on certain selected variables, are reliable over a period of time.