Abstract

One of the major challenges facing Massive Open Online Courses (MOOCs) is assessing learner performance beyond traditional automated assessment methods. The massive number of course participants creates a bottleneck, especially in the context of problem solving. Peer assessment has been proposed as an effective way to tackle this issue. However, the validity of the process is still debated: it suffers from a lack of credibility and has several weaknesses, particularly with regard to group formation. This paper develops a new peer assessment method for MOOCs that improves the accuracy of learner grades. Our proposal is based on three main steps: the formation of learner groups, the assessment itself, and the synthesis of the results. First, the group formation process can use different elements of the learner model to build heterogeneous groups. Next, each learner is required to grade a small number of peer productions. Finally, the various grades are synthesized using data about both each learner's assessment ability and the complexity of the problems. To evaluate the proposed peer assessment process, we conducted an experiment teaching Software Quality Assurance to beginning computer science students in the first university cycle.
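
The synthesis step lends itself to a short illustration. The sketch below shows one plausible way to combine several peer grades into a final grade, weighting each grade by the assessor's estimated ability and adjusting for problem complexity. The names (PeerGrade, synthesize_grade), the [0, 1] ability and complexity scales, and the specific weighting formula are assumptions for illustration only, not the paper's actual method.

```python
from dataclasses import dataclass

@dataclass
class PeerGrade:
    assessor_id: str
    grade: float             # raw grade on a 0-100 scale (assumed)
    assessor_ability: float  # estimated assessment ability in [0, 1] (assumed)

def synthesize_grade(peer_grades: list[PeerGrade], problem_complexity: float) -> float:
    """Combine several peer grades into one final grade.

    `problem_complexity` in [0, 1] is a hypothetical normalization factor:
    the more complex the problem, the more the aggregate is pulled toward
    the scale midpoint to dampen unreliable extreme grades.
    """
    total_weight = sum(g.assessor_ability for g in peer_grades)
    if total_weight == 0:
        # Fall back to an unweighted mean if no ability data is available.
        return sum(g.grade for g in peer_grades) / len(peer_grades)
    # Ability-weighted mean of the peer grades.
    weighted_mean = sum(g.grade * g.assessor_ability for g in peer_grades) / total_weight
    # Hypothetical complexity adjustment: blend slightly toward the midpoint (50).
    blend = 0.1 * problem_complexity
    return (1 - blend) * weighted_mean + blend * 50.0

# Example: three assessors of varying ability grade one submission.
grades = [
    PeerGrade("a1", 70.0, 0.9),
    PeerGrade("a2", 55.0, 0.4),
    PeerGrade("a3", 80.0, 0.7),
]
print(synthesize_grade(grades, problem_complexity=0.6))  # ~69.3
```

Weighting by assessment ability means that a grade from a learner who has historically graded accurately counts more than one from a weak assessor, which is the intuition the abstract describes; the complexity blend is merely one way such a factor could enter the aggregation.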
