Abstract

Online judge (OJ) systems are becoming increasingly popular in applications such as programming training, competitive programming contests, and even employee recruitment, mainly due to their ability to automatically evaluate code submissions. In higher education, OJ systems have been extensively used in programming courses because automatic evaluation can drastically reduce the grading workload of instructors and teaching assistants and thereby make class sizes scalable. However, in our teaching we find that existing OJ systems fall short in the feedback they give to students and teachers, especially regarding code errors and students' knowledge states. The lack of such automatic feedback increases teachers' involvement and thus prevents college programming training from scaling further. To tackle this challenge, we leverage historical student data obtained from our OJ system and implement two automated functions, namely code error prediction and student knowledge tracing, using machine learning models. We demonstrate how students and teachers may benefit from the adoption of these two functions during programming training.
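
The abstract does not specify the models used. As a concrete illustration of what the knowledge tracing function involves, the sketch below implements classic Bayesian Knowledge Tracing (BKT) over a student's sequence of OJ verdicts; the `BKTSkill` class, its parameter values, and the example verdict sequence are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class BKTSkill:
    """Bayesian Knowledge Tracing state for one programming skill.

    All parameter defaults are illustrative, not values from the paper.
    """
    p_known: float = 0.2   # P(L0): prior probability the skill is mastered
    p_learn: float = 0.15  # P(T): probability of learning after one practice step
    p_slip: float = 0.1    # P(S): probability of a wrong answer despite mastery
    p_guess: float = 0.2   # P(G): probability of a correct answer without mastery

    def predict_correct(self) -> float:
        """Probability that the student's next submission is correct."""
        return self.p_known * (1 - self.p_slip) + (1 - self.p_known) * self.p_guess

    def update(self, correct: bool) -> None:
        """Bayesian posterior update from one graded submission, then a learning step."""
        if correct:
            num = self.p_known * (1 - self.p_slip)
            den = num + (1 - self.p_known) * self.p_guess
        else:
            num = self.p_known * self.p_slip
            den = num + (1 - self.p_known) * (1 - self.p_guess)
        posterior = num / den
        # Account for the chance of learning during this practice opportunity.
        self.p_known = posterior + (1 - posterior) * self.p_learn

# Trace a hypothetical student's verdicts on "loops" problems from an OJ log.
skill = BKTSkill()
for verdict in [False, False, True, True, True]:
    skill.update(verdict)
    print(f"P(mastered) = {skill.p_known:.3f}, "
          f"P(next correct) = {skill.predict_correct():.3f}")
```

After each verdict the estimated mastery probability is updated, so a teacher dashboard could flag skills whose mastery estimate stays low; the code error prediction function would analogously fit a classifier over features of past submissions.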
