Abstract

Learning high-quality vector representations (a.k.a. embeddings) of educational questions lies at the core of knowledge tracing (KT), the task of estimating students' knowledge states by predicting the probability that they answer questions correctly. Although existing KT efforts have leveraged question information to achieve remarkable improvements, most of them learn question embeddings by following the supervised learning paradigm. In this article, we propose a novel question embedding pre-training method for improving knowledge tracing with self-supervised Bi-graph Co-contrastive learning (BiCo). Technically, building on the self-supervised learning paradigm, we first select two similar but distinct graph views (representing objective and subjective semantic perspectives, respectively) as the semantic sources of question embeddings. Then, we design a primary task (structure recovery) together with two auxiliary tasks (question difficulty recovery and contrastive learning) to further enhance the representativeness of the question embeddings. Finally, extensive experiments conducted on two real-world datasets show that BiCo yields embeddings with higher expressive power, enabling KT methods to predict students' performance more effectively.
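For intuition only, the sketch below shows one plausible way such a multi-task pre-training objective could be assembled in PyTorch: a structure-recovery term, a question-difficulty-recovery term, and an InfoNCE-style contrastive term between the two question views. The loss weights, the temperature, and all tensor names here are illustrative assumptions and do not reproduce the exact objective defined in the paper.

```python
# Hypothetical sketch (not the authors' released code): combining a primary
# structure-recovery loss with two auxiliary losses (difficulty recovery and
# cross-view contrastive learning) during question-embedding pre-training.
import torch
import torch.nn.functional as F

def info_nce(z_obj, z_subj, tau=0.5):
    """Contrastive term pulling together the two views of the same question."""
    z1 = F.normalize(z_obj, dim=-1)
    z2 = F.normalize(z_subj, dim=-1)
    logits = z1 @ z2.t() / tau                          # (N, N) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    # Symmetric cross-entropy: each question's positive is its other view.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

def pretrain_loss(z_obj, z_subj, adj_logits, adj_true, diff_pred, diff_true,
                  lam_diff=0.1, lam_cl=0.1):
    """Primary structure-recovery loss plus the two auxiliary losses."""
    l_struct = F.binary_cross_entropy_with_logits(adj_logits, adj_true)
    l_diff = F.mse_loss(diff_pred, diff_true)           # question difficulty recovery
    l_cl = info_nce(z_obj, z_subj)                      # co-contrastive term
    return l_struct + lam_diff * l_diff + lam_cl * l_cl
```

In such a setup, the pre-trained question embeddings would then be handed to a downstream KT model in place of (or to initialize) its supervised question-embedding layer.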

