Abstract

The use of peer assessment for open-ended activities has advantages for both teachers and students. Teachers can reduce the workload of the correction process, and students achieve a better understanding of the subject by evaluating the activities of their peers. To ease the process, it is advisable to provide the students with a rubric with which to perform the assessment of their peers; however, restricting students to providing only numerical scores is detrimental, as it prevents them from giving valuable feedback to their peers. Since this assessment produces two modalities of the same evaluation, namely a numerical score and textual feedback, it is possible to apply automatic techniques to detect inconsistencies between them, thus minimizing the teachers' workload for supervising the whole process. This paper proposes a machine learning approach for the detection of such inconsistencies. To this end, we consider two different approaches, each of which is tested with different algorithms, in order to both evaluate the approach itself and find appropriate models to make it successful. The experiments carried out with 4 groups of students and 2 types of activities show that the proposed approach yields reliable results, thus representing a valuable tool for ensuring the fair operation of the peer assessment process.
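The abstract does not spell out the detection mechanism, so the following is only a minimal sketch of one plausible reading of the idea: train a regressor that predicts the numerical score from the textual feedback and flag reviews where the predicted and given scores diverge. The function name `flag_inconsistent_reviews`, the TF-IDF/Ridge model, and the `threshold` parameter are illustrative assumptions, not the paper's actual method.

```python
# Illustrative sketch only: assumes inconsistencies are detected by comparing
# the score implied by the feedback text with the score the reviewer gave.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline


def flag_inconsistent_reviews(comments, scores, threshold=2.0):
    """Return indices of peer reviews whose text and numerical score disagree.

    comments  -- list of textual feedback strings
    scores    -- list of numerical scores given by the same reviewers
    threshold -- hypothetical cut-off on |predicted - given| (scale-dependent)
    """
    # Learn a mapping from feedback text to numerical score.
    model = make_pipeline(TfidfVectorizer(), Ridge())
    model.fit(comments, scores)

    # Score implied by the text; in practice this should come from held-out
    # predictions (e.g., cross-validation) so the model cannot simply
    # memorize the very reviews it is asked to audit.
    predicted = model.predict(comments)

    residual = np.abs(predicted - np.asarray(scores, dtype=float))
    return [i for i, r in enumerate(residual) if r > threshold]
```

The design choice here is deliberately simple: any text regressor could replace the TF-IDF/Ridge pipeline, and the disagreement threshold would need to be tuned to the rubric's score scale and validated against teacher judgments.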
