Abstract

Peer assessment is a method that has been shown to have a positive impact on learners' cognitive and metacognitive skills. It also represents an effective alternative to instructor-provided assessment in computer-based education, particularly in massive online learning settings such as MOOCs. Various platforms have incorporated this mechanism as an assessment tool; however, most implementations rely on random matching of peers. The contributions introduced in this article step past the randomized approach by modeling learner matching as a many-to-many assignment problem and solving it with an appropriate combinatorial optimization algorithm. The adopted approach rests on a matching strategy that is also discussed in this article. Furthermore, we present two key steps on which both the matching strategy and the representation of the problem depend: 1) modeling the learner as an assessor, and 2) clustering assessors into categories that reflect learners' assessment competency. Additionally, we introduce a methodology for increasing the accuracy of peer assessment by weighting the scores given by learners. Finally, compared to random allocation of submissions, experiments with the approach show promising results in terms of the validity of assessments and the acceptance of peer feedback.
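The abstract does not specify the optimization algorithm or the weighting formula, so the following is only a minimal illustrative sketch of the two ideas it names: assigning each submission to several assessors as a many-to-many matching (here a simple capacity-aware greedy stand-in, not the article's actual solver), and aggregating peer scores weighted by an assumed per-assessor competency value. All names (`greedy_match`, `weighted_score`, the competency dictionary) are hypothetical.

```python
# Illustrative sketch only: the article's actual combinatorial optimization
# algorithm and weighting scheme are not given in the abstract.
from collections import defaultdict

def greedy_match(submissions, assessors, reviews_per_submission,
                 load_per_assessor, competency):
    """Assign each submission to several assessors, preferring assessors
    with more remaining capacity (to balance load) and then higher
    competency, while forbidding self-review (a submission is identified
    by its author here, a simplifying assumption)."""
    remaining = {a: load_per_assessor for a in assessors}
    assignment = defaultdict(list)
    for s in submissions:
        # Re-rank per submission: most remaining capacity first, then competency.
        ranked = sorted(assessors,
                        key=lambda a: (-remaining[a], -competency[a]))
        for a in ranked:
            if len(assignment[s]) == reviews_per_submission:
                break
            if a != s and remaining[a] > 0:
                assignment[s].append(a)
                remaining[a] -= 1
    return dict(assignment)

def weighted_score(scores, competency):
    """Aggregate peer scores as a mean weighted by assessor competency."""
    total_w = sum(competency[a] for a in scores)
    return sum(competency[a] * v for a, v in scores.items()) / total_w

learners = ["a", "b", "c", "d"]
comp = {"a": 0.9, "b": 0.7, "c": 0.5, "d": 0.3}  # assumed competency values
matching = greedy_match(learners, learners, reviews_per_submission=2,
                        load_per_assessor=3, competency=comp)
final = weighted_score({"a": 80, "b": 60}, comp)  # competency-weighted mean
```

A production system would replace the greedy loop with an exact assignment solver and derive the competency weights from the assessor model and clustering steps described in the article.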
