Abstract

Engaging students in the creation of learning resources has demonstrated pedagogical benefits and can lead to large repositories of learning resources that complement student learning in different ways. However, to use a learnersourced repository effectively, a selection process is needed to separate high-quality from low-quality resources, as some student-created resources can be ineffective, inappropriate, or incorrect. A common and scalable approach to evaluating the quality of learnersourced content is peer review, where students assess the quality of resources authored by their peers. This method, however, poses a "truth inference" problem, since the judgements of students, as experts-in-training, cannot be wholly trusted. This paper presents a graph-based approach that propagates reliability and trust using data from peer and instructor evaluations in order to simultaneously infer the quality of learnersourced content and the reliability and trustworthiness of users in a live setting. We evaluate our approach using empirical data from a learnersourcing system called RiPPLE. Results demonstrate that the proposed approach propagates reliability and leverages the limited availability of instructors for spot-checking, improving accuracy over baseline models and the model currently used in the system.
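To make the idea of jointly inferring resource quality and rater reliability concrete, the following is a minimal sketch of an iterative truth-inference scheme on a bipartite rater-resource graph. It is an illustrative assumption, not the paper's actual model: resource quality is estimated as a reliability-weighted average of peer ratings, rater reliability is updated from agreement with the current quality estimates, and instructor spot-checks are pinned as trusted values. All function and variable names here are hypothetical.

```python
def infer(ratings, spot_checks, iters=50):
    """Jointly estimate resource quality and rater reliability.

    ratings: dict mapping (rater, resource) -> score in [0, 1].
    spot_checks: dict mapping resource -> instructor score,
                 treated as trusted and held fixed.
    Returns (quality, reliability) dicts.
    """
    raters = {r for r, _ in ratings}
    resources = {i for _, i in ratings}
    reliability = {r: 1.0 for r in raters}   # start by trusting everyone equally
    quality = {i: 0.5 for i in resources}    # neutral prior on quality

    for _ in range(iters):
        # Update quality: reliability-weighted mean of peer ratings,
        # overridden wherever an instructor spot-check exists.
        for i in resources:
            if i in spot_checks:
                quality[i] = spot_checks[i]
                continue
            num = den = 0.0
            for (r, j), s in ratings.items():
                if j == i:
                    num += reliability[r] * s
                    den += reliability[r]
            if den > 0:
                quality[i] = num / den
        # Update reliability: raters whose ratings sit close to the
        # current quality estimates (and hence to spot-checks) gain trust.
        for r in raters:
            errs = [abs(s - quality[i])
                    for (u, i), s in ratings.items() if u == r]
            reliability[r] = max(1e-6, 1.0 - sum(errs) / len(errs))
    return quality, reliability
```

In this sketch, trust propagates from the spot-checked resources outward: a rater who agrees with instructor judgements gains reliability, which in turn gives their ratings more weight on resources the instructor never saw.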
