Abstract
Most massive open online courses (MOOCs) use simple schemes for aggregating peer grades, such as taking the mean or the median, or they compute weights from information other than the instructor's opinion of the students' knowledge. To reduce the difference between the instructor's scores and the students' aggregated scores, some proposals compute specific weights for aggregating the peer grades. In this work, we analyse the use of students' engagement and performance measures to compute personalized weights, and we study the validity of the aggregated scores produced by the common aggregation functions, the mean and the median, together with two others from the information retrieval field, the geometric and harmonic means. To test this procedure, we analysed data from a MOOC on Philosophy. The course had 1,059 registered students, 91 of whom participated in a peer review process that consisted of writing an essay and rating three of their peers' essays using a rubric. We compared the aggregated scores obtained using weighted and non-weighted versions of these functions. Our results show that the correlation between the aggregated scores and the instructor's grades improves over plain peer grading when the median is used and the weights are computed from students' performance in chapter tests.
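As an illustration of the four aggregation functions mentioned above, the following is a minimal sketch of their weighted forms; the function names, the example data, and the handling of the weighted median are assumptions for illustration, not the paper's actual implementation.

```python
import math

def weighted_mean(grades, weights):
    # Arithmetic mean with per-reviewer weights.
    return sum(g * w for g, w in zip(grades, weights)) / sum(weights)

def weighted_median(grades, weights):
    # Grade at which the cumulative weight first reaches half the total.
    half = sum(weights) / 2
    cum = 0.0
    for g, w in sorted(zip(grades, weights)):
        cum += w
        if cum >= half:
            return g

def weighted_geometric_mean(grades, weights):
    # Exponential of the weighted mean of log-grades (grades must be > 0).
    total = sum(weights)
    return math.exp(sum(w * math.log(g) for g, w in zip(grades, weights)) / total)

def weighted_harmonic_mean(grades, weights):
    # Reciprocal of the weighted mean of reciprocal grades (grades must be > 0).
    return sum(weights) / sum(w / g for g, w in zip(grades, weights))
```

With uniform weights these reduce to the plain (non-weighted) versions; for example, `weighted_mean([6, 8, 10], [1, 1, 2])` gives more influence to the third reviewer than `weighted_mean([6, 8, 10], [1, 1, 1])` would.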