Abstract

Question-response representation (a.k.a. embedding) lies at the core of knowledge tracing, which is pivotal for modeling students' evolving knowledge states to predict their future performance. Existing efforts typically learn question-response representations in a supervised manner, thus neglecting the implicit information embedded in student-question-concept connections and suffering from data sparsity due to rare student-question interactions. Inspired by the recent success of self-supervised learning, in this paper we propose Question-Response representation with dual-level Contrastive Learning (QRCL) to improve knowledge tracing and address the aforementioned problems. Our approach considers three types of views for obtaining question-response representations: the native fold views derived from native interactions between students and questions, the augmented fold views generated by contrastive augmentation with Singular Value Decomposition (SVD), and the co-response relation views acquired from correlations among questions. Using carefully designed contrastive rules, Graph Neural Networks (GNNs) are adopted as the backbone to encode the information from each view and generate better question-response representations. Extensive experiments on five datasets clearly show that our proposed method can effectively predict students' performance.
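The SVD-based augmentation mentioned above can be illustrated with a minimal sketch: a sparse student-question interaction matrix is reconstructed at low rank via truncated SVD, yielding a densified matrix that can serve as an augmented view for contrastive learning. All names, shapes, and the toy matrix here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical student-question interaction matrix
# (1 = student answered the question, 0 = no interaction).
interactions = np.array([
    [1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 1, 1, 1],
], dtype=float)

def svd_augment(matrix: np.ndarray, rank: int) -> np.ndarray:
    """Return a low-rank reconstruction of `matrix` via truncated SVD.

    The reconstruction smooths and densifies the sparse interaction
    structure, producing an augmented view of the original graph.
    """
    u, s, vt = np.linalg.svd(matrix, full_matrices=False)
    return u[:, :rank] @ np.diag(s[:rank]) @ vt[:rank, :]

# Keep only the top-2 singular components as the augmented view.
augmented = svd_augment(interactions, rank=2)
print(augmented.shape)  # → (4, 5)
```

A contrastive objective would then pull together the representations of the same question-response pair across the native and augmented views while pushing apart unrelated pairs; the sketch above only shows how the augmented view itself could be constructed.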
