Abstract

Although learners' judgments of their own learning are crucial for self-regulated study, judgment accuracy tends to be low. To increase accuracy, we had participants make combined judgments. In Experiment 1, 247 participants studied a ten-chapter expository text. In the simple judgments group, after each chapter participants rated the likelihood of correctly answering a knowledge question on that chapter (judgment of learning; JOL). In the combined judgments group, participants rated text difficulty before making a JOL. No accuracy differences emerged between the groups, but a comparison of early-chapter and late-chapter judgment magnitudes showed that the judgment manipulation had induced differences in cognitive processing. In Experiment 2, we therefore manipulated judgment scope. Rather than predicting answer correctness for an entire chapter, another 256 participants rated after each chapter the likelihood of correctly answering a question on a specific concept from that chapter. Both judgment accuracy and knowledge test scores were higher in the combined judgments group. Moreover, while judgment accuracy dropped to a nonsignificant level between early and late chapters in the simple judgments group, accuracy remained constant with combined judgments. We discuss implications for research into metacomprehension processes in computer-supported learning and for adaptive learner support based on judgment prompts.
