Abstract

Over the last several years, Massive Open Online Courses (MOOCs) have received significant attention in the higher education literature as the most recent development in open online distance learning. Given the large number of learners, one of the most challenging tasks is designing an accurate method to evaluate answers and provide feedback, especially for open questions (notably problem situations). To tackle this problem, MOOCs use peer assessment techniques (known as peer grading), which suffer from a lack of credibility. In this paper, we present a new peer assessment method for MOOCs that aims to improve the accuracy of grading results. Our proposal comprises three steps: a clustering unit, assessment, and treatment of the results. The clustering unit groups learners with similar profiles, based on the parameters stored in the learner models within the MOOC. After clustering, each learner is required to grade a small number of their peers' tasks as part of their own task. The resulting scores are then dispatched for treatment, where a synthesis is produced for the assessment. To assess the feasibility of the proposed peer assessment method, we report the results of tests conducted on a developed prototype.
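The three-step pipeline described above can be sketched in code. The following is a minimal, hypothetical illustration only: the learner profiles, the binning used in place of a real clustering algorithm, the number of graders per task, and the median-based synthesis are all assumptions for the sake of a runnable example, not details taken from the paper.

```python
import random
from statistics import median

random.seed(0)

# Hypothetical learner profiles: (learner_id, skill_level in [0, 1]).
# A real system would draw these parameters from the learner model.
learners = [(i, i / 12) for i in range(12)]

# Step 1 -- clustering unit: group learners with similar profiles.
# Naive binning by skill level stands in for a real clustering algorithm.
def cluster(learners, n_bins=3):
    clusters = {b: [] for b in range(n_bins)}
    for lid, skill in learners:
        b = min(int(skill * n_bins), n_bins - 1)
        clusters[b].append(lid)
    return [members for members in clusters.values() if members]

# Step 2 -- assessment: each learner grades a small number of
# peers' tasks drawn from within their own cluster.
def assign_graders(members, graders_per_task=3):
    assignments = {}  # submission_id -> list of grader ids
    for sub in members:
        peers = [m for m in members if m != sub]
        k = min(graders_per_task, len(peers))
        assignments[sub] = random.sample(peers, k)
    return assignments

# Step 3 -- treatment of results: synthesize the peer scores into a
# single grade per submission (the median is one robust choice).
def synthesize(scores):
    return {sub: median(marks) for sub, marks in scores.items() if marks}

grades = {}
for members in cluster(learners):
    assignments = assign_graders(members)
    # Simulated peer scores on a 0-20 scale (placeholder for real grading).
    scores = {sub: [random.randint(8, 20) for _ in graders]
              for sub, graders in assignments.items()}
    grades.update(synthesize(scores))
```

After the loop, `grades` maps each of the twelve learners to a synthesized mark produced only by peers from the same cluster, mirroring the clustering-then-grading-then-treatment flow of the proposed method.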
