Abstract

We examine changes in students’ rating behavior during a semester-long sequence of peer evaluation laboratory exercises in an introductory mechanics course. We perform a quantitative analysis of the ratings given by students to peers’ physics lab reports, and conduct interviews with students. We find that peers persistently assign higher ratings to lab reports than do experts, that peers begin the semester by giving high ratings most frequently and end the semester with frequent middle ratings, and that peers go through the semester without much change in the frequency of low ratings. We then use student interviews to develop a model for student engagement with peer assessment. This model is based on two competing influences which appear to shape peer evaluation behavior: a strong disinclination to give poor ratings with a complementary preference to give high ratings when in doubt, and an attempt to develop an expertlike criticality when assessing peers’ work.
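To make the flavor of this quantitative analysis concrete, here is a minimal sketch of how one might tabulate low, middle, and high rating frequencies and compare peer and expert means. It is an illustration only, not the study’s actual analysis: the 1–5 scale, the bin boundaries, and all ratings below are hypothetical placeholders.

```python
# Hypothetical sketch: tabulate how often raters assign low / middle / high
# ratings and compare mean ratings across groups. The scale, bins, and data
# are invented for illustration; they are not the study's data.
from collections import Counter

def bin_rating(r: int) -> str:
    """Map a 1-5 rating onto low / middle / high bins (assumed boundaries)."""
    if r <= 2:
        return "low"
    if r == 3:
        return "middle"
    return "high"

def frequencies(ratings: list[int]) -> dict[str, float]:
    """Fraction of ratings falling in each bin."""
    counts = Counter(bin_rating(r) for r in ratings)
    return {b: counts[b] / len(ratings) for b in ("low", "middle", "high")}

# Invented ratings of the same reports by peers (start and end of semester)
# and by experts, shaped to mirror the trends described in the abstract.
peers_start = [5, 4, 5, 3, 5, 4, 5, 5, 4, 3]
peers_end   = [4, 3, 3, 4, 3, 5, 3, 4, 3, 4]
experts     = [3, 3, 2, 4, 3, 3, 2, 4, 3, 3]

for label, data in [("peers (start)", peers_start),
                    ("peers (end)", peers_end),
                    ("experts", experts)]:
    mean = sum(data) / len(data)
    freq = frequencies(data)
    bins = "  ".join(f"{b}={freq[b]:.0%}" for b in ("low", "middle", "high"))
    print(f"{label:13s}  mean={mean:.2f}  {bins}")
```

On this toy data the output shows peers starting with mostly high ratings, shifting toward middle ratings by the end of the semester, and remaining above the expert mean throughout, which is the qualitative pattern the abstract reports.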

Highlights

  • Peer assessment [1] has been used in classrooms across a broad range of disciplines, from second-language writing [2] to conceptual physics [3], and offers the potential for instructors to administer open-ended assignments in large classes without suffering an untenable grading burden [4].

  • We measure changes in student assessment behavior and develop a model for these changes by examining peer assessment at the beginning and end of an introductory mechanics course.

  • Our students participated in peer assessment several times throughout the semester-long course, each time producing and assessing content-rich lab reports in the form of 5-minute video presentations of physics experiments.

Introduction

Peer assessment [1] has been used in classrooms across a broad range of disciplines, from second-language writing [2] to conceptual physics [3], and offers the potential for instructors to administer open-ended assignments in large classes without suffering an untenable grading burden [4]. In our course, students produced content-rich lab reports in the form of short video presentations of physics experiments. We quantitatively analyze student assessments of these videos at the beginning and at the end of the semester, and we interview students to gain a qualitative understanding of their attitudes and practices. This understanding is critical for developing a more complete view of peer assessment of physics in particular, and for developing models of student critique and communication of physics concepts in general.

Peer assessment in the physics classroom is fundamentally similar to peer assessment in other fields: all peer assessment systems involve students at roughly the same level of education evaluating each other’s work. Beyond this basic similarity, though, peer assessment systems may differ widely: the assessments may be anonymous or face to face; they may provide only low-stakes formative assessment, or they may replace instructor grading entirely; participation in the system may be compulsory or voluntary; and groups, pairs, or individuals may assess the work of other groups, pairs, or individuals. While the different forms of peer assessment are numerous, the motivating benefits of peer assessment fall into four broad categories [13].
