Abstract
Do environmental factors and personal characteristics of instructors influence students in their assessment of teaching effectiveness? Critics would argue for an evaluation independent of student participation. In this paper we discuss implications related to our experiences with student evaluation.

Student assessment of teaching skills was made mandatory at the College of Business Administration, Bowling Green State University, in 1969. This action followed faculty approval of the evaluation concept and definition of a rating scheme. In succeeding years, faculty have reaffirmed the principle and continued use of the original instrument. Initial faculty acceptance and subsequent re-endorsement are due in part to increased awareness that student assessments would provide faculty with valid insights about their abilities to help students learn and, in addition, strengthen claims for rewards. Moreover, there was recognition that students continually judge faculty teaching skills and, in turn, share assessments with other students and often with administrators and faculty as well. A formal, organized, and sanctioned rating system was thought preferable to the vagaries of the campus grapevine.

Despite general faculty endorsement of student evaluation and its application under the Bowling Green plan, some faculty argue that student evaluation in general, and the Bowling Green rating scheme in particular, fail to adequately gauge instructional skills or faculty contributions to the learning process. These critics argue that environmental conditions over which faculty have little or no control, such as class size, level of course offering, and nature of subject matter, as well as personal traits, such as grading standards and age of faculty, influence students' assessments. They argue that distortions occur because of the subjective nature of the evaluation process and student immaturity. Depending upon the kind of influence, assessments of teaching effectiveness will be either depreciated or inflated.
Consequently, student feedback contributes neither to the professional growth and development of faculty nor to the creation of a setting more conducive to learning. Moreover, evaluations cannot be tied to the faculty reward system. It follows, say the critics, that not only should student evaluation be a matter of individual choice, but also that faculty should be free to experiment with different evaluation schemes if they feel student feedback might be helpful.

Administration of the student evaluation scheme within the College of Business is a shared responsibility of the Dean's Office, Departmental Chairmen, faculty, and students. The Dean's Office distributes and collects evaluation instruments, records ratings, and forwards forms to the appropriate Departments. Departmental Chairmen and/or senior faculty, using evaluations as feedback instruments, counsel faculty about approaches to exploit strengths or to repair shortcomings. The system is designed to promote the counseling function, specifically as it relates to attempts to help junior faculty become more effective teachers. Faculty are responsible for assuring students that their participation in the evaluation process will not affect academic standings: faculty are absent when students complete evaluation instruments, students do not identify themselves on forms, and faculty may review instruments only after final grades have been deposited with the Registrar (see Appendix A). Students are expected to participate in a responsible and honest manner. The Dean's Advisory Council, a group of undergraduate leaders of honorary societies and professional organizations in the College, addresses letters to students prior to the distribution of evaluation instruments requesting genuine cooperation (see Appendix B).
The evaluation instrument employed in the Bowling Green scheme is open-ended; that is, students are requested to describe their instructors' strengths or shortcomings, to identify approaches which might contribute to improved performance, and to assign a particular grade (A-4, B-3, C-2, D-1, or F-0). The grades assigned provide an index of performance (see Appendix C). Open-ended rather than structured, multiple-response types of instruments were adopted in order to give students the opportunity to "tell it as it is" and to provide evaluators degrees of freedom to define for themselves what constitutes effective teaching, be it the personal qualities of the instructor or learning outcomes. The open-ended instrument allows students to describe what turned them on (or off) and is thought especially helpful to faculty as a guide to self-improvement.

Critics argue that student evaluation in general and the Bowling Green system in particular are defective: students possess neither the skills nor the dispositions to make valid judgments, and their perceptions of instructional quality are influenced by environmental conditions and personal traits of the faculty. These latter arguments of the critics are summarized in the following null hypotheses: