Abstract

Asking questions in classrooms can produce metacognitive judgments in students about their confidence in being able to answer correctly. In audience response systems (ARSs), these judgments can be elicited and used as additional feedback metrics. This study (n = 79) explores how online concurrent item-by-item judgments (OCJ) and retrospective composite judgments of performance accuracy (RJPA) can enhance students' performance and self-assessment accuracy (i.e., calibration, as measured by sensitivity, specificity, and the absolute accuracy index). In each of eight weeks, students answered a multiple-choice quiz, indicated their confidence that each answer was correct (OCJ), and estimated their final score (RJPA). The quizzes followed the voting/revoting paradigm, in which students answer all the quiz questions, receive feedback, and answer the same questions again before the correct answers are shown. The students were randomly assigned to two conditions based on the feedback they received in the ARS: the OCJ group (n = 41) received the percentage distribution and peers' OCJs as feedback metrics, while the RJPA group (n = 38) received the percentage distribution and peers' RJPAs. Data analysis revealed a systematic underconfidence that affected students' OCJ judgments. As a result, students in the RJPA group scored significantly higher than those in the OCJ group, self-assessed more accurately in the revoting phase, and felt more confident overall in the revoting phase. The study also discusses the relationship between the two judgment types employed and the calibration variability between the two study phases.
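For readers unfamiliar with the three calibration measures named above, the sketch below illustrates one common way to compute them from item-by-item confidence judgments and correctness scores. The function name, the 0-1 confidence scaling, and the 0.5 dichotomisation threshold are illustrative assumptions rather than the authors' implementation; the absolute accuracy index follows Schraw's (2009) formulation.

```python
import numpy as np

def calibration_metrics(confidence, correct, threshold=0.5):
    """Illustrative calibration measures for item-by-item confidence judgments.

    confidence: per-item confidence ratings scaled to [0, 1] (assumed scaling)
    correct:    per-item correctness (1 = correct, 0 = incorrect)
    threshold:  confidence cut-off used to dichotomise judgments (assumed value)
    """
    confidence = np.asarray(confidence, dtype=float)
    correct = np.asarray(correct, dtype=int)
    confident = confidence >= threshold

    # Sensitivity: proportion of correct answers the student was confident about
    sensitivity = confident[correct == 1].mean() if (correct == 1).any() else np.nan
    # Specificity: proportion of incorrect answers the student was unconfident about
    specificity = (~confident[correct == 0]).mean() if (correct == 0).any() else np.nan
    # Absolute accuracy index: mean squared gap between confidence and performance
    absolute_accuracy = np.mean((confidence - correct) ** 2)

    return sensitivity, specificity, absolute_accuracy

# Example: five quiz items with confidence ratings and correctness
print(calibration_metrics([0.9, 0.6, 0.3, 0.8, 0.4], [1, 1, 0, 0, 1]))
```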
