Abstract

In online education platforms, accurately predicting student performance is essential for timely dropout prevention and interventions for at-risk students. This task is made difficult by the prevalent use of Multiple-Choice Questions (MCQs) on learnersourcing platforms, where noise in student-generated content and the limitations of existing unsigned graph-based models, specifically their inability to distinguish the semantics of correct and incorrect responses, hinder accurate performance prediction. To address these issues, we introduce the Large Language Model enhanced Signed Bipartite graph Contrastive Learning (LLM-SBCL) model, a novel multimodal model combining Signed Graph Neural Networks (SGNNs) with a Large Language Model (LLM). Our model represents students' answers as a signed bipartite graph, with positive and negative edges denoting correct and incorrect responses, respectively. To mitigate the impact of noise, we apply contrastive learning to the signed graph and combine the result with knowledge point embeddings from the LLM to further enhance predictive performance. Evaluated on five real-world datasets, our model demonstrates superior accuracy and stability, with an average F1 improvement of 3.7% over the best baseline models.
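As a concrete illustration of the representation described above, the following is a minimal sketch of how student responses could be encoded as a signed bipartite graph, with +1 edges for correct answers and -1 edges for incorrect ones. The record format, field names, and helper function are hypothetical; the abstract does not specify the actual LLM-SBCL implementation.

```python
# Hypothetical response records: (student_id, question_id, is_correct).
# These names are illustrative only; the paper's data schema is not given here.
responses = [
    ("s1", "q1", True),
    ("s1", "q2", False),
    ("s2", "q1", False),
    ("s2", "q3", True),
]

def build_signed_bipartite_graph(responses):
    """Index students and questions separately (the two node partitions) and
    emit signed edges: +1 for a correct answer, -1 for an incorrect one."""
    student_index, question_index = {}, {}
    edges = []  # (student_idx, question_idx, sign)
    for student, question, is_correct in responses:
        s = student_index.setdefault(student, len(student_index))
        q = question_index.setdefault(question, len(question_index))
        edges.append((s, q, 1 if is_correct else -1))
    return student_index, question_index, edges

students, questions, signed_edges = build_signed_bipartite_graph(responses)
print(signed_edges)  # [(0, 0, 1), (0, 1, -1), (1, 0, -1), (1, 2, 1)]
```

In a full pipeline along the lines the abstract sketches, such signed edge lists would feed an SGNN encoder trained with a contrastive objective, and the resulting node embeddings would be combined with LLM-derived knowledge point embeddings before prediction.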
