Abstract

Programming online judges (POJs) are increasingly used in CS1 classes, as they allow students to practice and get quick feedback. For instructors, they are useful tools for creating assignments and exams. However, selecting problems in POJs is time-consuming. First, problems are generally not organised around the topics covered in a CS1 syllabus. Second, assessing whether problems require similar effort to complete and map onto the same topic is a subjective, expert-dependent task. The difficulty increases if the instructor must create variations of these assessments, e.g. to avoid plagiarism. Here, we investigate how to support CS1 instructors in selecting problems to compose one-size-fits-all or personalised assignments and exams. We propose a novel intelligent recommender system based on a fine-grained, data-driven analysis of students' effort when solving problems in the IDE of a POJ, and on automatic detection of the CS1 topics of problems from their descriptions. Data collected from 2714 students are processed so that our AI method's recommendations support the instructors' decision-making process. We evaluated our method against the state of the art in a single-blind experiment with CS1 instructors (N = 35). Results show that our recommendations are 88% accurate, surpassing our baseline ($p < 0.05$). Finally, our work paves the way for novel POJ smart learning environments in which instructors define learning tasks (assignments/exams) supported by AI.
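
To make the topic-detection component concrete, the following is a minimal sketch of one common way to label problem descriptions with CS1 topics: TF-IDF features fed to a linear classifier. The abstract does not specify the paper's actual pipeline, so this is an assumption for illustration only; the training examples and topic labels below are invented.

    # Hypothetical sketch: assigning CS1 syllabus topics to POJ problem
    # descriptions with TF-IDF + logistic regression. All data are toy examples.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Invented labelled examples: (problem description, CS1 topic)
    train = [
        ("Read an integer n and print the sum of the first n numbers", "loops"),
        ("Given a list of grades, compute the average", "arrays"),
        ("Print 'even' if the input is divisible by 2, else 'odd'", "conditionals"),
        ("Store n names in a vector and print them in reverse order", "arrays"),
        ("Repeat reading numbers until the user types 0, then print the count", "loops"),
        ("If the temperature is above 30, print 'hot'; otherwise print 'mild'", "conditionals"),
    ]
    texts, topics = zip(*train)

    # TF-IDF over unigrams and bigrams, then a multi-class linear classifier.
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),
        LogisticRegression(max_iter=1000),
    )
    model.fit(texts, topics)

    # Predict the topic of an unseen POJ problem description.
    print(model.predict(["Read numbers until -1 is entered and print their sum"])[0])

In such a pipeline, the predicted topic labels, combined with the effort measures mined from students' IDE activity, would give the recommender the two signals the abstract names: topical fit and comparable difficulty.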
