Abstract

Massive open online courses (MOOCs) are effective and flexible resources to educate, train, and empower populations. Peer assessment (PA) provides a powerful pedagogical strategy to support educational activities and foster learners' success, even where a huge number of learners is involved. Item response theory (IRT) can model students' features, such as the skill to accomplish a task and the capability to mark tasks. In this paper the authors investigate the applicability of IRT models to PA in the learning environment of MOOCs. The main goal is to evaluate the relationships between some students' IRT parameters (ability, strictness) and some PA parameters (number of graders per task, and rating scale). The authors use a dataset simulating a large class (1,000 peers), built from a Gaussian distribution of the students' skill to accomplish a task. The IRT analysis of the PA data shows that the best estimate of peers' ability is obtained when 15 raters per task are used, with a [1,10] rating scale.
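The abstract only states that the simulated class has 1,000 peers, that students' skill follows a Gaussian distribution, and that the best setting was 15 raters per task on a [1,10] scale. As a minimal sketch of what such a simulated PA dataset might look like, the Python snippet below generates Gaussian-distributed ability and strictness traits and a peer-grade matrix; the observation model, the parameter values, and all variable names are illustrative assumptions, not the paper's actual generative model or IRT fitting procedure.

```python
import numpy as np

rng = np.random.default_rng(42)

N_STUDENTS = 1_000              # size of the simulated class (as in the paper)
N_RATERS = 15                   # graders per task (the paper's best-performing setting)
SCALE_MIN, SCALE_MAX = 1, 10    # [1,10] rating scale

# Latent traits (hypothetical parameterisation; the paper only states that
# the skill to accomplish a task follows a Gaussian distribution).
ability = rng.normal(loc=0.0, scale=1.0, size=N_STUDENTS)      # skill to accomplish the task
strictness = rng.normal(loc=0.0, scale=0.5, size=N_STUDENTS)   # tendency to grade harshly

# Each student's work is rated by N_RATERS randomly chosen peers.
grades = np.zeros((N_STUDENTS, N_RATERS))
for s in range(N_STUDENTS):
    raters = rng.choice(np.delete(np.arange(N_STUDENTS), s),
                        size=N_RATERS, replace=False)
    # Assumed linear observation model: the observed grade reflects the
    # author's ability, lowered by the rater's strictness, plus noise,
    # then mapped onto the [1,10] scale.
    raw = ability[s] - strictness[raters] + rng.normal(0.0, 0.3, size=N_RATERS)
    grades[s] = np.clip(
        np.round((raw + 3.0) / 6.0 * (SCALE_MAX - SCALE_MIN) + SCALE_MIN),
        SCALE_MIN, SCALE_MAX,
    )

# A naive ability estimate is the mean peer grade; an IRT fit would instead
# estimate ability and strictness jointly from the full grade matrix.
estimated_ability = grades.mean(axis=1)
print("correlation(true ability, mean grade):",
      np.corrcoef(ability, estimated_ability)[0, 1])
```

Such a synthetic grade matrix is the kind of input an IRT analysis would take when comparing different numbers of raters per task and different rating scales against the known (simulated) abilities.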

