Abstract

A category learning judgment (CLJ) involves judging one's learning or performance for a given topic or category. The present study was the first to investigate CLJs in a classroom, where students' judgments of how well they have learned topics may be particularly relevant for guiding their study decisions. In an undergraduate statistics class, students predicted their performance on six different exam topics, as well as their global exam performance, for each exam during the semester. Regarding the absolute accuracy of CLJs, we observed slight overestimation (bias), substantial deviation from accuracy (absolute bias), and little improvement across exams. Students' CLJs varied among topics, but they were less variable than actual topic performance and were poor at discriminating well-learned from poorly learned topics (i.e., low relative accuracy). We examined two factors predictive of CLJ accuracy: topic difficulty and student mastery of the topics. Regarding topic difficulty, a hard-easy effect was observed, such that more difficult topics produced greater overestimation and easier topics produced more underestimation. The hard-easy effect also extended to absolute bias: difficult topics produced larger deviations from accuracy than easy topics did. Regarding student mastery of topics, we found that lower mastery predicted CLJ overestimation and higher mastery predicted CLJ underestimation. Lower mastery was also associated with larger absolute bias. Compared to global judgments, CLJs were less accurate, although students were more confident in their CLJs. In sum, developing methods to improve the accuracy of CLJs in classrooms is an important direction for future research.
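The calibration measures summarized above (signed bias and absolute bias) can be sketched as follows. This is a minimal illustration using hypothetical numbers on a 0-100 percentage scale; the study's actual judgments, scores, and scoring procedure are not shown here.

```python
# Hypothetical per-topic data for one student (not from the study).
cljs = [80, 70, 90, 60, 75, 85]          # predicted score for each of six topics
performance = [70, 75, 60, 65, 80, 70]   # actual score for each topic

# Signed bias: mean of (judgment - performance); positive = overestimation.
bias = sum(j - p for j, p in zip(cljs, performance)) / len(cljs)

# Absolute bias: mean absolute deviation from accuracy, regardless of direction.
abs_bias = sum(abs(j - p) for j, p in zip(cljs, performance)) / len(cljs)

print(f"bias = {bias:.2f}, absolute bias = {abs_bias:.2f}")
```

Note that bias and absolute bias can diverge: over- and underestimation cancel out in signed bias but both contribute to absolute bias, which is why the abstract reports only slight overestimation alongside substantial absolute bias.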
