Abstract
The analysis and grading of open answers (i.e., answers to open-ended questions) is a powerful means of modelling the state of knowledge and the cognitive level of students in e-learning systems. In previous work (presented at the last SPEL workshop) we described an approach to open-answer grading, based on Constraint Logic Programming (CLP) and peer assessment, in which each student was modelled as a triple of finite-domain variables: K for the student's Knowledge of the question's topic, C for the Correctness of his/her answer, and J for his/her estimated ability to evaluate ("Judge") a peer's answer. The CLP Prolog module supported the grading process, eventually yielding a complete set of grades even though the teacher had actually graded only a (substantial) part of them. Here we tackle the problem of grading open answers with an alternative approach: peer assessment in a social, collaborative e-learning setting, mediated by the teacher through a simple model based on Bayesian networks, which manages the student models (built on the same finite-domain variables as above) and again produces automated evaluations of the answers not graded by the teacher. In particular, we describe the OpenAnswer web-based system, which allows teachers and students to apply our approach, and present the results of some experiments we conducted.
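To make the abstract's modelling idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual OpenAnswer implementation) of Bayesian inference over the finite-domain variables K, C and J. All domain sizes, conditional probability tables and the assumption that each peer's judging ability J is known are illustrative choices, not taken from the paper; the posterior is computed by plain enumeration.

```python
# Illustrative sketch of the K/C/J Bayesian model described in the abstract.
# All numbers and structure are assumed for demonstration only.

# Hypothetical 3-value domain for every variable: 0=low, 1=medium, 2=high.
DOM = (0, 1, 2)

def p_c_given_k(c, k):
    # Assumed CPT: an answer's Correctness tends to match the
    # student's Knowledge of the topic.
    return 0.6 if c == k else 0.2

def p_grade(g, c, j):
    # Assumed CPT: the grade g a peer assigns depends on the answer's
    # true Correctness c and on the peer's Judging ability j; better
    # judges report the true correctness more reliably.
    accuracy = {0: 0.4, 1: 0.6, 2: 0.8}[j]
    return accuracy if g == c else (1 - accuracy) / 2

def posterior_c(peer_grades):
    """Posterior over C for one ungraded answer, by enumeration.

    peer_grades: list of (g, j) pairs, where g is the grade a peer
    assigned and j is that peer's (here assumed known) Judging ability.
    K is marginalised out under a uniform prior.
    """
    scores = {}
    for c in DOM:
        total = 0.0
        for k in DOM:
            p = (1.0 / len(DOM)) * p_c_given_k(c, k)
            for g, j in peer_grades:
                p *= p_grade(g, c, j)
            total += p
        scores[c] = total
    z = sum(scores.values())
    return {c: v / z for c, v in scores.items()}
```

For example, two reliable peers (J = 2) both assigning the top grade should push the posterior of C toward the highest value, mimicking how the teacher-mediated model could fill in grades the teacher did not assign.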