Abstract
Knowledge tracing (KT) is a core task in intelligent education systems: it tracks how students' knowledge states change as they practice and predicts whether they will answer the next exercises correctly. Current KT models generally focus on short interaction sequences and still have significant limitations on long sequences; their performance on sparse datasets also leaves considerable room for improvement. This paper proposes BPSKT, a knowledge tracing model that combines BERT pre-training with sparse attention. It extracts long-sequence node features through two GCN layers, then fine-tunes the pre-trained representations with a sparse-attention BERT to adapt them to the downstream task. Finally, a series of validation experiments progressively verifies the logical consistency and structural effectiveness of BPSKT.
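The two building blocks named in the abstract can be illustrated concretely. The sketch below is a minimal NumPy illustration, not the authors' implementation: it shows (a) a two-layer GCN propagation rule of the standard form Z = Â·ReLU(Â·X·W1)·W2 with the symmetrically normalized adjacency Â = D^{-1/2}(A+I)D^{-1/2}, and (b) a sliding-window mask of the kind used to sparsify self-attention over long sequences. All function names, shapes, and the choice of a window-style sparsity pattern are illustrative assumptions.

```python
import numpy as np

def gcn_two_layer(A, X, W1, W2):
    """Illustrative two-layer GCN forward pass (not the BPSKT code).

    A:  (n, n) binary adjacency matrix of the exercise/skill graph
    X:  (n, f) initial node features
    W1: (f, h) first-layer weights, W2: (h, d) second-layer weights
    """
    # Add self-loops and symmetrically normalize: A_hat = D^{-1/2}(A+I)D^{-1/2}
    A_tilde = A + np.eye(A.shape[0])
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt
    H = np.maximum(A_hat @ X @ W1, 0.0)  # layer 1 with ReLU
    return A_hat @ H @ W2                # layer 2: node embeddings

def sliding_window_mask(seq_len, window):
    """Boolean attention mask: position i attends only to |i - j| <= window.

    This reduces attention cost from O(n^2) to O(n * window) on long
    interaction sequences -- one common form of sparse attention.
    """
    idx = np.arange(seq_len)
    return np.abs(idx[:, None] - idx[None, :]) <= window
```

In a full model, the GCN embeddings would initialize the exercise representations fed to the BERT encoder, and the mask would replace the dense attention pattern inside each attention head during pre-training and fine-tuning.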