Abstract

It seems clear that the use of systematic student ratings of college teaching has increased since Astin and Lee's study [1] of a decade ago. Seldin [40] found that more than half of the private liberal arts colleges he surveyed now use systematic student ratings. In Seldin's study, the collection and use of other kinds of teaching evaluation data (i.e., informal student opinions, classroom visits, colleagues' opinions, scholarly research and publication, student examination performance, course syllabi and examinations, long-term follow-up of students, alumni opinions, and grade distributions) appear to have decreased, with the exception of committee evaluations. Likewise, Riggs [37] reported high use of student ratings in AACTE schools (86 percent in 1975). It would appear that student ratings have become an important, if not the most important, kind of systematically collected data considered in evaluating college teaching. The use of student ratings as a means of evaluating college teaching has caused a considerable stir in higher education. Researchers have conducted numerous studies examining the validity of student ratings in terms of such criteria as student achievement, course grades, and instructor behavior [13, 15, 21, 22, 30, 49, 53]. In general, the results have been mixed, and the validity issue still appears to be far from settled. While the research regarding the reliability of student ratings is generally supportive [13], Seiler et al. [39] report that the reliability of student ratings over time is low (accounting for less than 40 percent of the variance). Some faculty unions have limited, through collective bargaining, …
