Abstract

Considering the amount of instructor time devoted to evaluating and grading essay examinations, one is surprised at the paucity, as well as the relative recency, of research concerned with student involvement in this type of evaluation. Ellis (1950) found, among 11 students given a pop essay quiz, a high correlation (rho = +.Q?) between the students' ratings and his own. His method required that 10 students (all but the one whose question was being rated) contribute to the rating of each question. Similarly, Nealey (1969), using the combined ratings of groups of 4 or 5 students, arrived at a similar correlation (rho = +.92) between student and instructor ratings on 11 short-answer essay questions. In both instances, class time was taken to present the instructor's conception of the ideal answer and for the independent evaluation by each team member of the question assigned to him. Peer evaluations have been known to cast doubt on certain time-honored practices. Eisenberg (1965), for example, found that peers of graduate students in psychology were able to predict with considerable accuracy (rho = +.89) the results of the latter's comprehensive examinations, against criterion data obtained a short time later. Addressing himself to peer and self-evaluation as they relate to grades, Burke (1969) found, using instructor grades as the criterion, that college students were unable to assign their own grades objectively and realistically (p. 448). He also found that agreement between peer ratings and instructor ratings was greater than that between self-evaluations and instructor ratings.
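The correlations reported above are Spearman rank-order coefficients (rho). As a minimal illustration of how such a coefficient is computed from two sets of ratings, the sketch below uses the standard formula rho = 1 - 6*Σd²/(n(n² - 1)); this is an assumption about the studies' exact procedure, not a reconstruction of it, and the formula is exact only when ranks contain no ties (average ranks are used here as a common approximation when ties occur).

```python
def ranks(values):
    """Return 1-based ranks, assigning tied values their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over a run of tied values.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 0-based positions i..j, shifted to 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    """Spearman rank correlation via the difference-of-ranks formula."""
    n = len(x)
    rx, ry = ranks(x), ranks(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))
```

For example, student ratings that preserve the instructor's ordering of the same answers yield rho = +1.0, while a completely reversed ordering yields rho = -1.0.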
