ABSTRACT

Background and Context: Timely feedback is important for learning, yet providing individual feedback is a challenge in large courses. Peer code review can address this issue and has been shown to offer various advantages, such as enhanced collaboration among students, improved coding skills, and exposure to and critique of different solution strategies.

Objectives: The goal is to gain a comprehensive understanding of the contents of voluntary peer reviews and of students' review skills in a first-semester university-based introductory programming course with about 900 enrolled students and no special support or training for conducting reviews.

Method: A qualitative analysis was conducted on 215 randomly sampled peer reviews and the associated code submissions. Factors such as the frequency of corrections, coding tips, questions, and "empty" submissions/feedback, as well as the sentiment of the feedback, are analyzed. Furthermore, errors in the submissions and the errors mentioned in the reviews are investigated in an exploratory manner.

Findings: The results indicate that, in general, students are better at identifying incorrect solutions as incorrect than at confirming correct submissions as correct. The feedback is neutral to positive and contains a lot of praise, but it is rather short, and uncertainty is expressed quite often. Students seem to be very nice to each other. Submissions without intensive solution attempts and "empty" reviews are quite rare. Students can correct errors and provide coding tips, but they often miss subtle errors such as partly incorrect algorithms, typos in method names, or output formatting errors.

Implications: The results can help train students to write better reviews and inform educators on how to provide better instruction or support for peer review.