Abstract

Background
Peer assessment plays an important role in large‐scale online learning, as it helps improve the effectiveness of learners' online learning. However, with the large volumes of numerical grades and textual feedback generated by peers, it becomes necessary to detect the reliability of peer assessment data and then develop an effective automated grading model to analyse the data and predict learners' learning results.

Objectives
The present study aimed to propose an automated grading model with reliability detection.

Methods
A total of 109,327 instances of peer assessment from a large‐scale teacher online learning program were tested in the experiments. The reliability detection approach comprised three steps: a recurrent convolutional neural network (RCNN) was used to detect grade consistency, bidirectional encoder representations from transformers (BERT) was used to detect text originality, and long short‐term memory (LSTM) was used to detect grade‐text consistency. Automated grading was then performed with a BERT‐RCNN model.

Results and Conclusions
The automated grading model with reliability detection was shown to be effective. For reliability detection, RCNN performed best in detecting grade consistency with an accuracy rate of 0.889, BERT performed best in detecting text originality with an improvement of 4.47% over the benchmark model, and LSTM performed best in detecting grade‐text consistency with an accuracy rate of 0.883. Moreover, the automated grading model with reliability detection achieved good performance, with an accuracy rate of 0.89, an increase of 12.1% over grading without reliability detection.

Implications
The results suggest that the automated grading model with reliability detection for large‐scale peer assessment is effective, with the following implications: (1) Reliability detection helps filter out low‐reliability data in peer assessment, thereby improving automated grading results. (2) The solution allows assessors to adjust the exclusion threshold for peer assessment reliability, providing a controllable automated grading tool that reduces manual workload while maintaining quality. (3) The solution could shift educational institutions from labour‐intensive grading procedures to a more efficient educational assessment pattern, freeing resources to support instructors and learners in improving the quality of peer feedback.
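The three‐step reliability detection described in the Methods can be viewed as a filtering pipeline applied before grading. The sketch below illustrates that pipeline structure only: it substitutes simple heuristics (deviation from the peer median, token overlap, and keyword sentiment) for the paper's RCNN, BERT, and LSTM detectors, and every function name, threshold, and keyword list is an illustrative assumption, not the authors' implementation.

```python
from dataclasses import dataclass
from statistics import median


@dataclass
class PeerAssessment:
    grade: float   # numerical grade given by the peer, e.g. on a 0-100 scale
    feedback: str  # textual feedback written by the peer


def grade_consistent(grade, peer_grades, tol=10.0):
    # Stand-in for the RCNN grade-consistency detector: flag grades
    # that deviate too far from the median of all peer grades.
    return abs(grade - median(peer_grades)) <= tol


def text_original(feedback, seen_texts, max_overlap=0.8):
    # Stand-in for the BERT text-originality detector: a crude token-overlap
    # check against feedback texts already seen for this submission.
    tokens = set(feedback.lower().split())
    if not tokens:
        return False
    for seen in seen_texts:
        overlap = len(tokens & set(seen.lower().split())) / len(tokens)
        if overlap > max_overlap:
            return False
    return True


def grade_text_consistent(grade, feedback):
    # Stand-in for the LSTM grade-text consistency detector: a high grade
    # paired with purely negative wording (or vice versa) is unreliable.
    positive = {"good", "clear", "excellent"}   # illustrative keyword lists
    negative = {"poor", "unclear", "wrong"}
    words = set(feedback.lower().split())
    if grade >= 80 and words & negative and not words & positive:
        return False
    if grade <= 40 and words & positive and not words & negative:
        return False
    return True


def filter_reliable(assessments):
    # Keep only assessments that pass all three reliability checks;
    # the surviving grades would then feed the automated grading model.
    grades = [a.grade for a in assessments]
    seen_texts, reliable = [], []
    for a in assessments:
        if (grade_consistent(a.grade, grades)
                and text_original(a.feedback, seen_texts)
                and grade_text_consistent(a.grade, a.feedback)):
            reliable.append(a)
        seen_texts.append(a.feedback)
    return reliable
```

In this toy form, an outlier grade, a near‐duplicate feedback text, or a grade that contradicts its own feedback is dropped before aggregation, which mirrors how low‐reliability data are excluded (and how the exclusion threshold could be tuned) in the study's pipeline.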