Abstract

Modeling a student's knowledge state while she is solving exercises is a crucial stepping stone towards providing better personalized learning experiences at scale. This task, also referred to as "knowledge tracing", has been explored extensively on exercises where student submissions fall into a finite, discrete solution space, e.g., a multiple-choice answer. However, we believe that rich information about a student's learning is captured within her responses to open-ended problems with unbounded solution spaces, such as programming exercises. In addition, sequential snapshots of a student's progress while she is solving a single exercise can provide valuable insights into her learning behavior. Creating representations of a student's knowledge state in this setting is challenging, but recent advances in machine learning offer promising techniques for learning representations of complex entities. In our work, we feed embedded program submissions into a recurrent neural network and train it on the task of predicting the student's success on the subsequent programming exercise. By training on this task, the model learns nuanced representations of a student's knowledge and reliably predicts future student performance.
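
To make the described pipeline concrete, below is a minimal sketch of the idea: each program submission is embedded into a fixed-size vector, the sequence of embeddings is passed through a recurrent network, and the final hidden state is used to predict success on the next exercise. The choice of LSTM, the embedding and hidden dimensions, and the use of PyTorch are illustrative assumptions; the abstract does not specify the authors' exact architecture or hyperparameters.

```python
# Hypothetical sketch of the pipeline described above (not the authors' exact model):
# embed each program submission, run the sequence through an RNN, and predict
# whether the student will succeed on the subsequent exercise.
import torch
import torch.nn as nn

class KnowledgeTracer(nn.Module):
    def __init__(self, embedding_dim=128, hidden_dim=256):
        super().__init__()
        # The RNN consumes one precomputed submission embedding per time step.
        self.rnn = nn.LSTM(embedding_dim, hidden_dim, batch_first=True)
        # Binary head: probability of success on the next programming exercise.
        self.classifier = nn.Linear(hidden_dim, 1)

    def forward(self, submission_embeddings):
        # submission_embeddings: (batch, num_submissions, embedding_dim)
        _, (h_n, _) = self.rnn(submission_embeddings)
        # h_n[-1] is the final hidden state summarizing the student's trajectory;
        # it serves here as the learned representation of her knowledge state.
        return torch.sigmoid(self.classifier(h_n[-1])).squeeze(-1)

# Toy usage: a batch of 4 students, each with 10 embedded submissions.
model = KnowledgeTracer()
embeddings = torch.randn(4, 10, 128)   # stand-in for program submission embeddings
success_prob = model(embeddings)       # predicted success on the next exercise
loss = nn.functional.binary_cross_entropy(success_prob, torch.ones(4))
```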
