Abstract

Learning high-quality vector representations (a.k.a. embeddings) of educational questions lies at the core of knowledge tracing (KT), the task of estimating students' knowledge states by predicting the probability that they correctly answer questions. Although existing KT efforts have leveraged question information to achieve remarkable improvements, most of them learn question embeddings by following the supervised learning paradigm. In this article, we propose a novel question embedding pre-training method for improving knowledge tracing with self-supervised Bi-graph Co-contrastive learning (BiCo). Technically, following the self-supervised learning paradigm, we first select two similar but distinct views (i.e., representing objective and subjective semantic perspectives) as the semantic source of question embeddings. Then, we design a primary task (structure recovery) together with two auxiliary tasks (question difficulty recovery and contrastive learning) to further enhance the representativeness of questions. Finally, extensive experiments conducted on two real-world datasets show that BiCo has higher expressive power, enabling KT methods to effectively predict students' performance.
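
The abstract describes a multi-task pre-training objective combining structure recovery, difficulty recovery, and cross-view contrastive learning. The sketch below is an illustrative reading of such an objective, not the authors' implementation: the function name, the loss weights, the NT-Xent-style contrastive term, and all tensor names are assumptions introduced here for clarity.

```python
import torch
import torch.nn.functional as F

def bico_style_pretraining_loss(z_obj, z_subj, adj_logits, adj_target,
                                diff_pred, diff_target,
                                temperature=0.5, w_diff=0.1, w_cl=0.1):
    """Hypothetical multi-task objective: structure recovery (primary)
    plus difficulty recovery and cross-view contrastive learning (auxiliary).
    All names and weights are assumptions, not taken from the paper."""
    # Primary task: recover the question-graph structure from embeddings.
    loss_struct = F.binary_cross_entropy_with_logits(adj_logits, adj_target)

    # Auxiliary task 1: recover question difficulty.
    loss_diff = F.mse_loss(diff_pred, diff_target)

    # Auxiliary task 2: align the objective and subjective views of each
    # question with an NT-Xent-style contrastive loss (assumed form).
    z1 = F.normalize(z_obj, dim=1)
    z2 = F.normalize(z_subj, dim=1)
    logits = z1 @ z2.t() / temperature          # (N, N) cross-view similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    loss_cl = F.cross_entropy(logits, labels)

    return loss_struct + w_diff * loss_diff + w_cl * loss_cl
```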
