Abstract
Advances in generative AI have posed a challenge for using open-ended questions to assess conceptual learning outcomes, as students increasingly use tools like ChatGPT to generate long textual answers. At the same time, teachers still have to spend substantial time reading those answers and inferring students' learning outcomes. We present TreeQuestion, a human-in-the-loop system that helps teachers create sets of multiple-choice questions to assess students' conceptual learning outcomes. When a teacher wants to assess students' comprehension of specific concepts, TreeQuestion draws on the knowledge embedded in large language models to generate a set of multiple-choice questions organized in a tree-like structure. We evaluated TreeQuestion with 96 students and 10 teachers. Students achieved similar performance on multiple-choice questions generated by TreeQuestion and on open-ended questions graded by teachers, while TreeQuestion reduced the effort teachers spent creating and grading questions compared with manually written open-ended questions. We estimate that, in a hypothetical class of 20 students, assessing learning outcomes with multiple-choice questions from TreeQuestion may require only 4.6% of the time needed for open-ended questions.
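To make the idea of a tree-structured question set concrete, here is a minimal sketch of one way such a structure might be represented. All names in it (MCQNode, flatten, the sample tree) are hypothetical illustrations for this note, not TreeQuestion's actual data model or implementation.

```python
# A minimal sketch of a tree-structured multiple-choice question set.
# Hypothetical illustration only; not TreeQuestion's actual data model.
from dataclasses import dataclass, field

@dataclass
class MCQNode:
    concept: str        # concept this question assesses
    question: str       # multiple-choice question stem
    options: list[str]  # answer options
    answer: int         # index of the correct option
    children: list["MCQNode"] = field(default_factory=list)  # sub-concept questions

def flatten(node: MCQNode, depth: int = 0):
    """Depth-first traversal: yield each question with its depth in the tree."""
    yield depth, node
    for child in node.children:
        yield from flatten(child, depth + 1)

# Hand-written example: a root question on a broad concept, with a
# follow-up that probes a narrower sub-concept.
root = MCQNode(
    concept="Recursion",
    question="What must every recursive function have to terminate?",
    options=["A loop", "A base case", "Global state", "A return type"],
    answer=1,
    children=[
        MCQNode(
            concept="Base case",
            question="Which call reaches the base case of factorial(n)?",
            options=["factorial(0)", "factorial(n)", "factorial(-1)", "None of these"],
            answer=0,
        ),
    ],
)

for depth, q in flatten(root):
    print("  " * depth + f"[{q.concept}] {q.question}")
```

One plausible advantage of such a layout, consistent with the abstract's description, is that questions on narrower sub-concepts are grouped under the broader concept they refine, so a teacher can see at a glance which level of a concept a student's errors fall on.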